What does multi-core mean?

Multi-core processors

Multi-core means that several processor cores are built into one processor; such processors are called multi-core processors. Outwardly, multi-core CPUs do not differ from single-core CPUs: the computing core is simply present several times. The operating system treats a multi-core processor like several separate processing units.

Depending on the number of cores, there are specific names (dual-core, quad-core, hexa-core, octa-core) that indicate how many cores are integrated in the processor.

Meanwhile there is a trend towards increasing the core count many times over: not just hexa-core or octa-core, but many-core CPUs with 256 or even 1,024 cores per processor. Such a processor naturally requires massive parallelism in the application in order to fully utilize the system.

From clock-oriented processor to multi-core processor

For a long time, raising the clock frequency was the decisive way to get more computing power out of a processor. Unfortunately, increasing the clock frequency also has drawbacks that lead to problems that are very difficult to solve.

  • higher power consumption (energy supply)
  • higher heat development (cooling)
  • more extensive cooling measures
  • louder computers through active cooling

Even a minimal increase in performance leads to dramatically higher energy consumption: the dynamic power consumption rises proportionally with the clock frequency. Additional transistors and smaller semiconductor structures further increase heat generation. At some point the cooling requirements can no longer be met with conventional means, and at the same time the stability and service life of the processor suffer.

Multiple cores are an alternative to ever higher clock rates: more performance with lower power consumption at the same time. The individual cores are clocked far lower than a comparable single core would be, but the overall performance of the processor increases with each additional core.

Limited applicability of multi-core CPUs

The effective use of multiple cores requires software that is written to process data in parallel on multiple cores at the same time. A problem, or the execution of a program, must be broken down into several small subtasks that can be distributed across several cores.
Unfortunately, typical applications are not designed for parallel execution, and in most cases it is not even necessary.
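The decomposition into subtasks can be sketched in Python. This is a hypothetical example, not taken from any particular application: a summation is split into chunks that are handed to a pool of workers.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker handles one independent subtask.
    return sum(chunk)

data = list(range(1_000_000))
n_workers = 4
chunk_size = len(data) // n_workers
chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

# The subtasks are distributed over the worker pool. Note that for
# CPU-bound work in CPython, a ProcessPoolExecutor would be needed to
# bypass the GIL, but the decomposition pattern is identical.
with ThreadPoolExecutor(max_workers=n_workers) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)  # same result as sum(data)
```

The pattern (split, compute independently, combine) is the same one that parallel frameworks apply at larger scale.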

Basically, you cannot simply add up the computing power of several cores. That would presuppose that the pending computing tasks can be parallelized.
As the number of cores increases, it becomes more and more difficult to actually use that computing power in practice. Most application programs cannot benefit from many cores, in particular the common office applications. Most desktop applications simply do not need sustained parallel computing power. Sometimes a multi-core processor does feel faster, but this is because several programs can actually run in parallel. In the best case, several demanding applications use different processor cores, for example programs running in the background (services) and in the foreground. Several cores thus create a kind of performance reserve: applications keep responding to input even when background processes demand computing power.

Processors with more than 4 cores, however, have a hard time proving themselves in normal use. While the operating system alone can keep a dual-core processor busy, more than 2 cores require support from the applications, which must distribute their calculations over several processor cores themselves.

The advantages of multi-core processors are a lower clock rate and lower energy consumption per core, and thus less effort for cooling. That means less power consumption with increased performance at the same time.

Amdahl's Law

Amdahl's law, named after Gene Amdahl, is a computer-science model for the speed-up of programs through parallel execution, for example on multi-core processors. According to Amdahl, the speed-up achievable through parallelization is limited by the sequential part of the problem.
Usually the goal is to solve a problem as quickly as possible. A multi-core processor can work on several problems at once, but only if the data can actually be processed in parallel. If a task cannot be parallelized, parallel processing brings no benefit either.
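Amdahl's law can be stated as S(n) = 1 / ((1 - p) + p/n), where p is the parallelizable fraction of the program and n the number of cores. A small sketch of the consequences:

```python
def amdahl_speedup(p, n):
    """Upper bound on speedup with parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95 % of a program parallelizable, 8 cores give less
# than a 6x speedup; the serial 5 % dominates as n grows, and the
# speedup can never exceed 1 / (1 - p) = 20, no matter how many
# cores are added.
print(round(amdahl_speedup(0.95, 8), 2))     # about 5.93
print(round(amdahl_speedup(0.95, 1024), 2))  # about 19.64
```

This is why many-core CPUs with hundreds of cores only pay off for problems whose sequential share is vanishingly small.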


Parallel computing and multi-processor technology are principles that found their way into supercomputers as early as the 1990s. These systems are multi-processor systems to this day. The techniques used there have since found their way into CPUs for mass-market products.

In multi-core systems, tasks are divided into threads, which are processed in parallel by several processor cores. For this, the operating system and the programs must be capable of multi-threading. With Hyper-Threading, Intel motivated the software industry early on to implement multi-threaded applications.
Even though multi-threaded applications are becoming more attractive due to the proliferation of computers with multi-core processors, software changes only very slowly. The problem: multi-threaded applications are much more complicated to program, and they introduce new and unfamiliar sources of error.
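One of those unfamiliar sources of error is the race condition on shared data. A minimal Python sketch: several threads increment a shared counter, and only the lock makes the non-atomic read-modify-write safe.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        # "counter += 1" is a read-modify-write sequence, not a single
        # atomic step; the lock makes it appear atomic to other threads.
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000, deterministic only because of the lock
```

Without the lock, updates can be lost intermittently, and the bug may not show up in testing at all. Errors of this kind are exactly what makes multi-threaded code harder to get right than sequential code.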

The parallelization of software poses a major challenge for programmers. One has to evaluate which parts of an application can benefit from multi-threading. Often the benefit is so small that the development effort is not justified by the result. To make matters worse, most applications do not require parallelization at all, mainly because most of the time they are waiting for user input anyway.

Typical applications that clearly benefit from parallelization are image and video processing, but only when functions are executed that allow data to be processed in parallel. Because multi-core processors bring no real advantage for many applications, programmers often forgo optimizing their code for multi-threading.
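Image filters are a good fit because each row (or tile) of an image can be processed independently, with no synchronization between workers. A sketch, assuming a grayscale image represented as a plain list of pixel rows:

```python
from concurrent.futures import ThreadPoolExecutor

def brighten_row(row, offset=40):
    # Rows are independent, so no locking is needed: this is an
    # "embarrassingly parallel" workload that scales with core count.
    return [min(255, px + offset) for px in row]

# Dummy 100x10 image; a real filter would operate on decoded pixel data.
image = [[10 * x for x in range(10)] for _ in range(100)]

with ThreadPoolExecutor() as pool:
    brightened = list(pool.map(brighten_row, image))
```

Video encoding parallelizes the same way, with independent frames or slices taking the place of rows.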

Automatic overclocking

The two large processor manufacturers, Intel and AMD, have long been promoting the switch to parallelizable programming, because neither is able to permanently raise the clock rates of their processors beyond about 4 GHz. The processors can only be overclocked in special situations.
Despite multi-core processors, the computing power of a single core therefore remains important. Intel and AMD have recognized this and built automatic overclocking into their multi-core CPUs. If several cores have nothing to do, they are switched off and one core is overclocked, for as long and as high as the thermal limits allow. Once that core gets too hot, it is clocked down again.

Effects on the processor and computer architecture

In a multi-core architecture, several computing cores have to share the available resources, for example RAM and interfaces. The more cores a processor has, the more memory and the more memory bandwidth are required. For this reason, the memory controller is no longer located in the chipset but integrated into the processor.

At the same time, the computing cores have to coordinate which core currently holds which data in its cache. As a rule, the cores work with a hierarchical cache structure: each core gets its own L1 and L2 cache, while all cores share the L3 cache. A cache coherence protocol ensures that the processor cores do not get in each other's way when using the shared cache.

Since the introduction of dual-core processors, the computer infrastructure has developed significantly, making multi-core processors more and more useful. Hard disks and SSDs can reorder accesses via NCQ (Native Command Queuing) and thereby serve data requests from several processor cores. The same applies to PCI Express (PCIe), on which several data transfers from different computing cores can run in parallel.

SMP - Symmetric Multiprocessing

With SMP, the operating system distributes the tasks to the individual cores and manages memory and hardware resources. The calculations can be divided among the available cores as required, but this also requires synchronization between the cores and the memory.

AMP - Asymmetric Multiprocessing

With AMP, each core is assigned its own software. In this way, applications and the operating system can be decoupled from one another on a single system; performance is limited only by the assigned processor core. AMP is more efficient because less synchronization is required between the cores.

Typical applications for single-core processors

  • Word processing
  • Browser
  • e-mail
  • Instant messaging
  • Virus scanner
  • MP3 player
  • simple image editing (cutting, scaling, color correction)

Typical multi-threading applications for multi-core processors

Applications that can be parallelized well include cutting 4K video, generating computer graphics with ray tracing, and compiling complex software projects. For these tasks, a processor can hardly have enough cores.

  • CAD
  • simulation
  • HD video
  • Compiler
  • 3D rendering
  • professional audio editing
  • professional photo editing
  • Video editing programs

Overall, multi-threading applications are special cases.

Multi-core standards and multi-threading support

The potential of a multi-core CPU is only realized if several running applications can employ several computing cores at the same time. This requires software that coordinates the distribution of calculations over several cores, and hardware that supports it. Distributing the work without manual intervention is also called "automatic parallelization".

Multi-threading, i.e. parallel execution within a process, is well defined by POSIX. POSIX is a collection of open APIs specified by the IEEE for operating system services. The standard defines interfaces for thread and process control. POSIX is found in Linux, Unix, and a number of embedded operating systems.

Multi-threading is integrated into programming languages such as Java and Ada. With OpenMP there is an API and a set of compiler directives with which developers can embed parallel instructions in C and C++ code.

The graphics APIs Vulkan and Direct3D offer ever better multi-threading support, and correspondingly adapted games are coming onto the market, so gaming PCs benefit from more than 4 computing cores.
