linux/arch/arm/include/asm/cache.h
/*
 * arch/arm/include/asm/cache.h
 */
#ifndef __ASMARM_CACHE_H
#define __ASMARM_CACHE_H

#define L1_CACHE_SHIFT		5
#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)

/*
 * Memory returned by kmalloc() may be used for DMA, so we must make
 * sure that all such allocations are cache aligned. Otherwise,
 * unrelated code may cause parts of the buffer to be read into the
 * cache before the transfer is done, causing old data to be seen by
 * the CPU.
 */
#define ARCH_KMALLOC_MINALIGN	L1_CACHE_BYTES

/*
 * With EABI on ARMv5 and above we must have 64-bit aligned slab pointers.
 */
#if defined(CONFIG_AEABI) && (__LINUX_ARM_ARCH__ >= 5)
#define ARCH_SLAB_MINALIGN	8
#endif

#endif
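
The comment above ARCH_KMALLOC_MINALIGN names the hazard this header guards against: a kmalloc() buffer handed to a DMA engine must own its cache lines outright, otherwise an unrelated object sharing a line can be pulled into the cache mid-transfer and the CPU ends up reading stale data. A minimal sketch of the pattern being protected, using the standard kernel DMA mapping API; the function name and buffer size are invented for illustration:

#include <linux/slab.h>
#include <linux/dma-mapping.h>

#define MY_RX_BUF_LEN	512	/* arbitrary example size */

/* Hypothetical receive path: dev is the device performing the DMA. */
static void *my_rx_buf_alloc(struct device *dev, dma_addr_t *handle)
{
	/*
	 * Because ARCH_KMALLOC_MINALIGN == L1_CACHE_BYTES, this buffer
	 * starts on a cache-line boundary and no unrelated object shares
	 * its cache lines, so cache maintenance done for the DMA transfer
	 * cannot corrupt or expose anyone else's data.
	 */
	void *buf = kmalloc(MY_RX_BUF_LEN, GFP_KERNEL);

	if (!buf)
		return NULL;

	*handle = dma_map_single(dev, buf, MY_RX_BUF_LEN, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, *handle)) {
		kfree(buf);
		return NULL;
	}
	return buf;
}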
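
The ARCH_SLAB_MINALIGN guard reflects the ARM EABI rule that 64-bit types such as long long and double are 8-byte aligned; a slab allocator guaranteeing only word (4-byte) alignment could hand out objects whose 64-bit members are misaligned. A hypothetical illustration (struct and function names invented):

#include <linux/slab.h>
#include <linux/types.h>

/*
 * Example only: under EABI the u64 member must sit on an 8-byte
 * boundary, which the compiler arranges within the struct. That
 * layout is only honoured if the allocation itself is 8-byte
 * aligned, which ARCH_SLAB_MINALIGN == 8 guarantees.
 */
struct my_record {
	u32 flags;
	u64 timestamp;
};

static struct my_record *my_record_alloc(void)
{
	return kmalloc(sizeof(struct my_record), GFP_KERNEL);
}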