Explanation for this performance behavior of CPU caches

sud03r

I am trying to reproduce the results presented here, in What Every Programmer Should Know About Memory, in particular the results shown in the figure below (pp. 20-21 of the paper).

[Figure: effects of cache size]

It is basically a plot of cycles per element for different working-set sizes; the sudden rises in the graph occur at the points where the working-set size exceeds the cache size.

To do this, I wrote this code here. I see that when all the data is fetched from memory (by flushing the cache with clflush each time; a minimal sketch of that flushing step is shown after the numbers below), performance is the same for all data sizes, as expected. But with the caches in use, I see the exact opposite trend:


Working Set: 16 Kb took 72.62 ticks per access
Working Set: 32 Kb took 46.31 ticks per access
Working Set: 64 Kb took 28.19 ticks per access
Working Set: 128 Kb took 23.70 ticks per access
Working Set: 256 Kb took 20.92 ticks per access
Working Set: 512 Kb took 35.07 ticks per access
Working Set: 1024 Kb took 24.59 ticks per access
Working Set: 2048 Kb took 24.44 ticks per access
Working Set: 3072 Kb took 24.70 ticks per access                  
Working Set: 4096 Kb took 22.17 ticks per access
Working Set: 5120 Kb took 21.90 ticks per access
Working Set: 6144 Kb took 23.29 ticks per access
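
For reference, the flush-based variant does something along these lines before each timed pass over the working set. This is only a minimal sketch of the idea, not my exact code; the flush_range name and the trailing mfence are illustrative.

// Evict every cache line of the working set, then fence so the flushes
// have completed before the timed pass begins.
static void flush_range(volatile unsigned char *buf, int bytes)
{
    for (int i = 0; i < bytes; i += 64)                 // 64 = cache line size
        asm volatile ("clflush (%0)" :: "r"(&buf[i]));
    asm volatile ("mfence" ::: "memory");               // wait for the flushes to finish
}

Calling something like this before every pass forces each access in that pass to miss, which is why the per-access cost stays flat across working-set sizes in that configuration.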

Can someone explain this behavior? I believe prefetching is doing a good job here, bringing data into the cache at the start of the pipeline.

If that is the case, how do I reproduce the observations shown in the figure? My cache sizes are L1 (32 KB), L2 (256 KB) and L3 (3072 KB).

Thanks.

Joe Z

Here is my modified code. It steps through memory STEP bytes at a time, updating it as it goes. I chose STEP to match my processor's cache line size (64 bytes). The fill loop is repeated REPEAT times, and it writes a single byte to each cache line.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define ARRAYSIZE(arr) (sizeof(arr)/sizeof(arr[0]))

#define STEP (64)
#define REPEAT (1000)

// Evict the cache line containing p (defined here but not called in this listing).
inline void
clflush(volatile void *p)
{
    asm volatile ("clflush (%0)" :: "r"(p));
}

// Serialize with cpuid, then read the time-stamp counter.
inline uint64_t
rdtsc()
{
    unsigned long a, d;
    asm volatile ("cpuid; rdtsc" : "=a" (a), "=d" (d) : : "ebx", "ecx");
    return a | ((uint64_t)d << 32);
}

//volatile int i;


volatile unsigned char data[1 << 26]; // 64MB


// Write one byte to every cache line in the first 'bytes' of the array,
// and repeat the whole pass REPEAT times.
void sequentialAccess(int bytes)
{
    for (int j = 0; j < REPEAT; j++)
        for (int i = 0; i < bytes; i += STEP)
            data[i] = i;
}

int rangeArr[] = {16, 32, 64, 128, 256, 512, 1024, 2048, 3072, 4096, 6144, 8192, 10240, 12*1024, 14*1024, 16*1024};

inline void test()
{
    for (int i = 0; i < ARRAYSIZE(rangeArr); i++)
    {
        uint64_t start, end;
        int kilobytes = rangeArr[i];
        start = rdtsc();
        sequentialAccess(kilobytes * 1024);
        end = rdtsc();
        double ticksPerAccess = 1.0 * (end - start) / (kilobytes * 1024 / STEP) / REPEAT;
        printf("%d kB took %lf ticks per access\n", kilobytes, ticksPerAccess);
    }
}

int
main(int ac, char **av)
{
    test();
    return 0;
}

On my "AMD Phenom(tm) II X4 965 Processor" (the string from /proc/cpuinfo), I get the following results:

16 kB took 9.148758 ticks per access
32 kB took 8.855980 ticks per access
64 kB took 9.008148 ticks per access
128 kB took 17.197035 ticks per access
256 kB took 14.416313 ticks per access
512 kB took 15.845552 ticks per access
1024 kB took 21.394375 ticks per access
2048 kB took 21.379112 ticks per access
3072 kB took 21.399206 ticks per access
4096 kB took 21.630234 ticks per access
6144 kB took 23.907972 ticks per access
8192 kB took 46.306525 ticks per access
10240 kB took 49.292271 ticks per access
12288 kB took 49.575894 ticks per access
14336 kB took 49.758874 ticks per access
16384 kB took 49.660779 ticks per access

That looks a lot more like Ulrich's curve (the plateaus line up with this processor's 64 KB L1 data cache, 512 KB L2 and 6 MB shared L3).


Edit: Looking more closely at Ulrich Drepper's original benchmark, I realized he builds the linked list outside the measured region and then measures the cost of chasing that list. That measures a parameter called load-to-use latency, which is a very useful parameter to extract from a memory system.

I believe the code below comes closer to that original ideal. It also raises the iteration count substantially, to make sure power-saving features on the processor do not kick in.

In the code below, you can adjust NPAD to match your processor's cache line size (with 8-byte pointers, NPAD = 56 makes each node exactly 64 bytes, i.e. one node per cache line), and ACCESSES to control how many times the benchmark loop iterates. The total number of iterations is completely independent of the size of the data set.

Code:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define NPAD (64 - sizeof(void *))
#define ACCESSES (1 << 28)


inline void
clflush(volatile void *p)
{
    asm volatile ("clflush (%0)" :: "r"(p));
}

inline uint64_t
rdtsc()
{
    unsigned long a, d;
    asm volatile ("cpuid; rdtsc" : "=a" (a), "=d" (d) : : "ebx", "ecx");
    return a | ((uint64_t)d << 32);
}


// One list node per cache line: a next pointer padded out to the line size.
struct l
{
    l    *next;
    char pad[NPAD];
};


l array[ (1 << 26) / sizeof(l) ]; // 64 MB of list nodes


// Build a circular singly-linked list through the first 'bytes' of the array,
// linking the nodes in sequential order.
void init_sequential(int bytes)
{
    int elems = bytes / sizeof(l);

    for (int i = 1; i < elems; i++)
    {
        array[i - 1].next = &array[i];
    }

    array[elems - 1].next = &array[0];
}

// Chase the list: every iteration is one dependent load, so the loop time
// is dominated by the load-to-use latency of the memory level being hit.
void measure_baseline( int accesses )
{
    volatile l *ptr = &array[0];

    while (accesses-- > 0)
        ptr = ptr->next;
}


int rangeArr[] = {16, 32, 64, 128, 256, 512, 1024, 2048, 3072, 4096, 6144, 8192, 10240, 12*1024, 14*1024, 16*1024};

inline void test()
{
    for (int i = 0; i < sizeof(rangeArr) / sizeof(rangeArr[0]); i++)
    {
        uint64_t start, end;
        int kilobytes = rangeArr[i];

        init_sequential( kilobytes * 1024 );

        start = rdtsc();
        measure_baseline( ACCESSES );
        end = rdtsc();
        double ticksPerAccess = 1.0 * (end - start) / ACCESSES;
        printf("%d kB took %lf ticks per access\n", kilobytes, ticksPerAccess);
    }
}

int
main(int ac, char **av)
{
    test();
    return 0;
}

And here is the data collected from my processor:

16 kB took 3.062668 ticks per access
32 kB took 3.002012 ticks per access
64 kB took 3.001376 ticks per access
128 kB took 9.204764 ticks per access
256 kB took 9.284414 ticks per access
512 kB took 15.848642 ticks per access
1024 kB took 22.645605 ticks per access
2048 kB took 22.698062 ticks per access
3072 kB took 23.039498 ticks per access
4096 kB took 23.481494 ticks per access
6144 kB took 37.720315 ticks per access
8192 kB took 55.197783 ticks per access
10240 kB took 55.886692 ticks per access
12288 kB took 56.262199 ticks per access
14336 kB took 56.153559 ticks per access
16384 kB took 55.879395 ticks per access

This shows a 3-cycle load-to-use latency for data in the L1D, which is what I would expect for this processor (and for most mainstream high-performance PC processors).
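
If, as the question suspects, the hardware prefetcher is still hiding much of the miss latency for the larger working sets (the list above is linked in sequential order), one variation worth trying is to link the nodes in a shuffled order. This is only a sketch built on the structures above (array and struct l), not part of the code I measured; it uses malloc() and rand() from <stdlib.h>, which is already included.

// Build a circular linked list that visits the nodes in a shuffled order,
// so the next line to be touched is not predictable by the prefetcher.
void init_random(int bytes)
{
    int elems = bytes / sizeof(l);
    int *order = (int *)malloc(elems * sizeof(int));

    for (int i = 0; i < elems; i++)
        order[i] = i;

    // Fisher-Yates shuffle of the node indices.
    for (int i = elems - 1; i > 0; i--)
    {
        int j = rand() % (i + 1);
        int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
    }

    // Link the nodes in shuffled order and close the cycle.
    for (int i = 1; i < elems; i++)
        array[order[i - 1]].next = &array[order[i]];
    array[order[elems - 1]].next = &array[order[0]];

    free(order);
}

With a shuffled chain, the larger working sets should show numbers much closer to the raw DRAM load-to-use latency, since sequential prefetching no longer helps.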
