
Commit fe3d197

hansendc authored and KAGA-KOKO committed
x86, mpx: On-demand kernel allocation of bounds tables
This is really the meat of the MPX patch set. If there is one patch to review in the entire series, this is the one. There is a new ABI here, and this kernel code also interacts with userspace memory in a relatively unusual manner (small FAQ below).

Long Description:

This patch adds two prctl() commands that enable or disable kernel management of bounds tables, covering both on-demand kernel allocation (see the patch "on-demand kernel allocation of bounds tables") and cleanup (see the patch "cleanup unused bound tables").

Applications do not strictly need the kernel to manage bounds tables, and we expect some applications to use MPX without taking advantage of this kernel support. This means the kernel cannot simply infer from the MPX registers whether an application needs bounds-table management; the prctl() is an explicit signal from userspace.

PR_MPX_ENABLE_MANAGEMENT is a signal from userspace that it requires the kernel's help in managing bounds tables. PR_MPX_DISABLE_MANAGEMENT is the opposite, meaning that userspace no longer wants the kernel's help. With PR_MPX_DISABLE_MANAGEMENT, the kernel won't allocate or free bounds tables even if the CPU supports MPX.

PR_MPX_ENABLE_MANAGEMENT fetches the base address of the bounds directory out of a userspace register (bndcfgu) and caches it in a new field (->bd_addr) in the 'mm_struct'. PR_MPX_DISABLE_MANAGEMENT sets "bd_addr" to an invalid address. Using this scheme, we can use "bd_addr" to determine whether kernel management of bounds tables is enabled.

Also, the only way to access the bndcfgu register is via an xsave, which can be expensive. Caching "bd_addr" like this also helps reduce the cost of those xsaves when doing table cleanup at munmap() time. Unfortunately, we cannot apply this optimization at #BR fault time because we need an xsave there anyway to get the value of BNDSTATUS.

==== Why does the hardware even have these Bounds Tables? ====

MPX has only 4 hardware registers for storing bounds information. If MPX-enabled code needs more than these 4 registers, it needs to spill them somewhere. It has two special instructions for this which allow the bounds to be moved between the bounds registers and some new "bounds tables".

#BR exceptions are conceptually similar to page faults and are raised by the MPX hardware both during bounds violations and when the tables are not present. This patch handles those #BR exceptions for not-present tables by carving the space out of the normal process's address space (essentially calling the new mmap() interface introduced earlier in this patch set) and then pointing the bounds directory over to it.

The tables *need* to be accessed and controlled by userspace because the instructions for moving bounds in and out of them are extremely frequent. They potentially happen every time a register pointing to memory is dereferenced. Any direct kernel involvement (like a syscall) to access the tables would obviously destroy performance.

==== Why not do this in userspace? ====

This patch is obviously doing this allocation in the kernel. However, MPX does not strictly *require* anything in the kernel. It can theoretically be done completely from userspace. Here are a few ways this *could* be done. I don't think any of them are practical in the real world, but here they are.

Q: Can virtual space simply be reserved for the bounds tables so that we never have to allocate them?
A: As noted earlier, these tables are *HUGE*. An X-GB virtual area needs 4*X GB of virtual space, plus 2GB for the bounds directory. If we were to preallocate them for the 128TB of user virtual address space, we would need to reserve 512TB+2GB, which is larger than the entire virtual address space today. This means they cannot be reserved ahead of time. Also, a single process's pre-populated bounds directory consumes 2GB of virtual *AND* physical memory. IOW, it's completely infeasible to prepopulate bounds directories.

Q: Can we preallocate bounds table space at the same time memory is allocated which might contain pointers that might eventually need bounds tables?
A: This would work if we could hook the site of each and every memory allocation syscall. This can be done for small, constrained applications. But it isn't practical at larger scale, since a given app has no way of controlling how all the parts of the app might allocate memory (think libraries). The kernel is really the only place to intercept these calls.

Q: Could a bounds fault be handed to userspace and the tables allocated there in a signal handler instead of in the kernel?
A: (thanks to tglx) mmap() is not on the list of safe async handler functions, and even if mmap() did work, it would still require locking or nasty tricks to keep track of the allocation state there.

Having ruled out all of the userspace-only approaches for managing bounds tables that we could think of, we create them on demand in the kernel.

Based-on-patch-by: Qiaowei Ren <qiaowei.ren@intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: linux-mm@kvack.org
Cc: linux-mips@linux-mips.org
Cc: Dave Hansen <dave@sr71.net>
Link: http://lkml.kernel.org/r/20141114151829.AD4310DE@viggo.jf.intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
1 parent fcc7ffd commit fe3d197

File tree

11 files changed (+399, -6 lines)


arch/x86/include/asm/mmu_context.h

Lines changed: 7 additions & 0 deletions
@@ -10,6 +10,7 @@
 #include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
 #include <asm/paravirt.h>
+#include <asm/mpx.h>
 #ifndef CONFIG_PARAVIRT
 #include <asm-generic/mm_hooks.h>
@@ -102,4 +103,10 @@ do { \
 } while (0)
 #endif
 
+static inline void arch_bprm_mm_init(struct mm_struct *mm,
+				     struct vm_area_struct *vma)
+{
+	mpx_mm_init(mm);
+}
+
 #endif /* _ASM_X86_MMU_CONTEXT_H */

arch/x86/include/asm/mpx.h

Lines changed: 41 additions & 0 deletions
@@ -5,6 +5,14 @@
 #include <asm/ptrace.h>
 #include <asm/insn.h>
 
+/*
+ * NULL is theoretically a valid place to put the bounds
+ * directory, so point this at an invalid address.
+ */
+#define MPX_INVALID_BOUNDS_DIR	((void __user *)-1)
+#define MPX_BNDCFG_ENABLE_FLAG	0x1
+#define MPX_BD_ENTRY_VALID_FLAG	0x1
+
 #ifdef CONFIG_X86_64
 
 /* upper 28 bits [47:20] of the virtual address in 64-bit used to
@@ -18,6 +26,7 @@
 #define MPX_BT_ENTRY_OFFSET	17
 #define MPX_BT_ENTRY_SHIFT	5
 #define MPX_IGN_BITS		3
+#define MPX_BD_ENTRY_TAIL	3
 
 #else
 
@@ -26,23 +35,55 @@
 #define MPX_BT_ENTRY_OFFSET	10
 #define MPX_BT_ENTRY_SHIFT	4
 #define MPX_IGN_BITS		2
+#define MPX_BD_ENTRY_TAIL	2
 
 #endif
 
 #define MPX_BD_SIZE_BYTES (1UL<<(MPX_BD_ENTRY_OFFSET+MPX_BD_ENTRY_SHIFT))
 #define MPX_BT_SIZE_BYTES (1UL<<(MPX_BT_ENTRY_OFFSET+MPX_BT_ENTRY_SHIFT))
 
+#define MPX_BNDSTA_TAIL		2
+#define MPX_BNDCFG_TAIL		12
+#define MPX_BNDSTA_ADDR_MASK	(~((1UL<<MPX_BNDSTA_TAIL)-1))
+#define MPX_BNDCFG_ADDR_MASK	(~((1UL<<MPX_BNDCFG_TAIL)-1))
+#define MPX_BT_ADDR_MASK	(~((1UL<<MPX_BD_ENTRY_TAIL)-1))
+
+#define MPX_BNDCFG_ADDR_MASK	(~((1UL<<MPX_BNDCFG_TAIL)-1))
 #define MPX_BNDSTA_ERROR_CODE	0x3
 
 #ifdef CONFIG_X86_INTEL_MPX
 siginfo_t *mpx_generate_siginfo(struct pt_regs *regs,
 				struct xsave_struct *xsave_buf);
+int mpx_handle_bd_fault(struct xsave_struct *xsave_buf);
+static inline int kernel_managing_mpx_tables(struct mm_struct *mm)
+{
+	return (mm->bd_addr != MPX_INVALID_BOUNDS_DIR);
+}
+static inline void mpx_mm_init(struct mm_struct *mm)
+{
+	/*
+	 * NULL is theoretically a valid place to put the bounds
+	 * directory, so point this at an invalid address.
+	 */
+	mm->bd_addr = MPX_INVALID_BOUNDS_DIR;
+}
 #else
 static inline siginfo_t *mpx_generate_siginfo(struct pt_regs *regs,
 					      struct xsave_struct *xsave_buf)
 {
 	return NULL;
 }
+static inline int mpx_handle_bd_fault(struct xsave_struct *xsave_buf)
+{
+	return -EINVAL;
+}
+static inline int kernel_managing_mpx_tables(struct mm_struct *mm)
+{
+	return 0;
+}
+static inline void mpx_mm_init(struct mm_struct *mm)
+{
+}
 #endif /* CONFIG_X86_INTEL_MPX */
 
 #endif /* _ASM_X86_MPX_H */

arch/x86/include/asm/processor.h

Lines changed: 18 additions & 0 deletions
@@ -954,6 +954,24 @@ extern void start_thread(struct pt_regs *regs, unsigned long new_ip,
 extern int get_tsc_mode(unsigned long adr);
 extern int set_tsc_mode(unsigned int val);
 
+/* Register/unregister a process' MPX related resource */
+#define MPX_ENABLE_MANAGEMENT(tsk)	mpx_enable_management((tsk))
+#define MPX_DISABLE_MANAGEMENT(tsk)	mpx_disable_management((tsk))
+
+#ifdef CONFIG_X86_INTEL_MPX
+extern int mpx_enable_management(struct task_struct *tsk);
+extern int mpx_disable_management(struct task_struct *tsk);
+#else
+static inline int mpx_enable_management(struct task_struct *tsk)
+{
+	return -EINVAL;
+}
+static inline int mpx_disable_management(struct task_struct *tsk)
+{
+	return -EINVAL;
+}
+#endif /* CONFIG_X86_INTEL_MPX */
+
 extern u16 amd_get_nb_id(int cpu);
 
 static inline uint32_t hypervisor_cpuid_base(const char *sig, uint32_t leaves)

arch/x86/kernel/setup.c

Lines changed: 2 additions & 0 deletions
@@ -960,6 +960,8 @@ void __init setup_arch(char **cmdline_p)
 	init_mm.end_data = (unsigned long) _edata;
 	init_mm.brk = _brk_end;
 
+	mpx_mm_init(&init_mm);
+
 	code_resource.start = __pa_symbol(_text);
 	code_resource.end = __pa_symbol(_etext)-1;
 	data_resource.start = __pa_symbol(_etext);

arch/x86/kernel/traps.c

Lines changed: 84 additions & 1 deletion
@@ -60,6 +60,7 @@
 #include <asm/fixmap.h>
 #include <asm/mach_traps.h>
 #include <asm/alternative.h>
+#include <asm/mpx.h>
 
 #ifdef CONFIG_X86_64
 #include <asm/x86_init.h>
@@ -228,7 +229,6 @@ dotraplinkage void do_##name(struct pt_regs *regs, long error_code) \
 
 DO_ERROR(X86_TRAP_DE, SIGFPE, "divide error", divide_error)
 DO_ERROR(X86_TRAP_OF, SIGSEGV, "overflow", overflow)
-DO_ERROR(X86_TRAP_BR, SIGSEGV, "bounds", bounds)
 DO_ERROR(X86_TRAP_UD, SIGILL, "invalid opcode", invalid_op)
 DO_ERROR(X86_TRAP_OLD_MF, SIGFPE, "coprocessor segment overrun", coprocessor_segment_overrun)
 DO_ERROR(X86_TRAP_TS, SIGSEGV, "invalid TSS", invalid_TSS)
@@ -278,6 +278,89 @@ dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
 }
 #endif
 
+dotraplinkage void do_bounds(struct pt_regs *regs, long error_code)
+{
+	struct task_struct *tsk = current;
+	struct xsave_struct *xsave_buf;
+	enum ctx_state prev_state;
+	struct bndcsr *bndcsr;
+	siginfo_t *info;
+
+	prev_state = exception_enter();
+	if (notify_die(DIE_TRAP, "bounds", regs, error_code,
+			X86_TRAP_BR, SIGSEGV) == NOTIFY_STOP)
+		goto exit;
+	conditional_sti(regs);
+
+	if (!user_mode(regs))
+		die("bounds", regs, error_code);
+
+	if (!cpu_feature_enabled(X86_FEATURE_MPX)) {
+		/* The exception is not from Intel MPX */
+		goto exit_trap;
+	}
+
+	/*
+	 * We need to look at BNDSTATUS to resolve this exception.
+	 * It is not directly accessible, though, so we need to
+	 * do an xsave and then pull it out of the xsave buffer.
+	 */
+	fpu_save_init(&tsk->thread.fpu);
+	xsave_buf = &(tsk->thread.fpu.state->xsave);
+	bndcsr = get_xsave_addr(xsave_buf, XSTATE_BNDCSR);
+	if (!bndcsr)
+		goto exit_trap;
+
+	/*
+	 * The error code field of the BNDSTATUS register communicates status
+	 * information of a bound range exception #BR or operation involving
+	 * bound directory.
+	 */
+	switch (bndcsr->bndstatus & MPX_BNDSTA_ERROR_CODE) {
+	case 2:	/* Bound directory has invalid entry. */
+		if (mpx_handle_bd_fault(xsave_buf))
+			goto exit_trap;
+		break; /* Success, it was handled */
+	case 1: /* Bound violation. */
+		info = mpx_generate_siginfo(regs, xsave_buf);
+		if (PTR_ERR(info)) {
+			/*
+			 * We failed to decode the MPX instruction.  Act as if
+			 * the exception was not caused by MPX.
+			 */
+			goto exit_trap;
+		}
+		/*
+		 * Success, we decoded the instruction and retrieved
+		 * an 'info' containing the address being accessed
+		 * which caused the exception.  This information
+		 * allows an application to possibly handle the
+		 * #BR exception itself.
+		 */
+		do_trap(X86_TRAP_BR, SIGSEGV, "bounds", regs, error_code, info);
+		kfree(info);
+		break;
+	case 0: /* No exception caused by Intel MPX operations. */
+		goto exit_trap;
+	default:
+		die("bounds", regs, error_code);
+	}
+
+exit:
+	exception_exit(prev_state);
+	return;
+exit_trap:
+	/*
+	 * This path out is for all the cases where we could not
+	 * handle the exception in some way (like allocating a
+	 * table or telling userspace about it). We will also end
+	 * up here if the kernel has MPX turned off at compile
+	 * time.
+	 */
+	do_trap(X86_TRAP_BR, SIGSEGV, "bounds", regs, error_code, NULL);
+	exception_exit(prev_state);
+}
+
 dotraplinkage void
 do_general_protection(struct pt_regs *regs, long error_code)
 {
