Module core::arch::aarch64
Platform-specific intrinsics for the aarch64 platform.
See the module documentation for more details.
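Everything in this module is gated on target_arch = "aarch64" and is unsafe to call. As a rough sketch of how an intrinsic from the Functions table below is typically reached (the stdsimd nightly feature gate is an assumption based on this documentation's era, and bit_reverse is an illustrative helper, not part of the module):

```rust
#![feature(stdsimd)] // assumed nightly gate for these unstable intrinsics

#[cfg(target_arch = "aarch64")]
fn bit_reverse(x: u64) -> u64 {
    // Intrinsics in this module are unsafe to call; `_rbit_u64` needs
    // no target feature beyond running on AArch64 itself.
    unsafe { core::arch::aarch64::_rbit_u64(x) }
}

#[cfg(not(target_arch = "aarch64"))]
fn bit_reverse(x: u64) -> u64 {
    x.reverse_bits() // portable fallback on other targets
}
```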
Structs
float32x2_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of two packed f32.
float32x4_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of four packed f32.
float64x1_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of one packed f64.
float64x2_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of two packed f64.
int16x2_t | [Experimental] [AArch64] ARM-specific 32-bit wide vector of two packed i16.
int16x4_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of four packed i16.
int16x8_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of eight packed i16.
int32x2_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of two packed i32.
int32x4_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of four packed i32.
int64x1_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of one packed i64.
int64x2_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of two packed i64.
int8x4_t | [Experimental] [AArch64] ARM-specific 32-bit wide vector of four packed i8.
int8x8_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of eight packed i8.
int8x16_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of sixteen packed i8.
int8x16x2_t | [Experimental] [AArch64] ARM-specific type containing two int8x16_t vectors.
int8x16x3_t | [Experimental] [AArch64] ARM-specific type containing three int8x16_t vectors.
int8x16x4_t | [Experimental] [AArch64] ARM-specific type containing four int8x16_t vectors.
int8x8x2_t | [Experimental] [AArch64] ARM-specific type containing two int8x8_t vectors.
int8x8x3_t | [Experimental] [AArch64] ARM-specific type containing three int8x8_t vectors.
int8x8x4_t | [Experimental] [AArch64] ARM-specific type containing four int8x8_t vectors.
poly16x4_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of four packed p16.
poly16x8_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of eight packed p16.
poly64x1_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of one packed p64.
poly64x2_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of two packed p64.
poly8x8_t | [Experimental] [AArch64] ARM-specific 64-bit wide polynomial vector of eight packed p8.
poly8x16_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of sixteen packed p8.
poly8x16x2_t | [Experimental] [AArch64] ARM-specific type containing two poly8x16_t vectors.
poly8x16x3_t | [Experimental] [AArch64] ARM-specific type containing three poly8x16_t vectors.
poly8x16x4_t | [Experimental] [AArch64] ARM-specific type containing four poly8x16_t vectors.
poly8x8x2_t | [Experimental] [AArch64] ARM-specific type containing two poly8x8_t vectors.
poly8x8x3_t | [Experimental] [AArch64] ARM-specific type containing three poly8x8_t vectors.
poly8x8x4_t | [Experimental] [AArch64] ARM-specific type containing four poly8x8_t vectors.
uint16x2_t | [Experimental] [AArch64] ARM-specific 32-bit wide vector of two packed u16.
uint16x4_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of four packed u16.
uint16x8_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of eight packed u16.
uint32x2_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of two packed u32.
uint32x4_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of four packed u32.
uint64x1_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of one packed u64.
uint64x2_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of two packed u64.
uint8x4_t | [Experimental] [AArch64] ARM-specific 32-bit wide vector of four packed u8.
uint8x8_t | [Experimental] [AArch64] ARM-specific 64-bit wide vector of eight packed u8.
uint8x16_t | [Experimental] [AArch64] ARM-specific 128-bit wide vector of sixteen packed u8.
uint8x16x2_t | [Experimental] [AArch64] ARM-specific type containing two uint8x16_t vectors.
uint8x16x3_t | [Experimental] [AArch64] ARM-specific type containing three uint8x16_t vectors.
uint8x16x4_t | [Experimental] [AArch64] ARM-specific type containing four uint8x16_t vectors.
uint8x8x2_t | [Experimental] [AArch64] ARM-specific type containing two uint8x8_t vectors.
uint8x8x3_t | [Experimental] [AArch64] ARM-specific type containing three uint8x8_t vectors.
uint8x8x4_t | [Experimental] [AArch64] ARM-specific type containing four uint8x8_t vectors.
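The vector structs above have no constructors listed here; a common way to obtain one is to reinterpret an array of the matching size and layout. The following is a minimal sketch under that assumption (add_lanes is an illustrative helper, not part of the module):

```rust
#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "neon")]
unsafe fn add_lanes(a: [u8; 8], b: [u8; 8]) -> [u8; 8] {
    use core::arch::aarch64::{uint8x8_t, vadd_u8};
    use core::mem::transmute;

    // uint8x8_t is the 64-bit wide vector of eight packed u8 from the
    // table above, so it has the same size as [u8; 8].
    let va: uint8x8_t = transmute(a);
    let vb: uint8x8_t = transmute(b);
    let sum = vadd_u8(va, vb); // lane-wise add, listed under Functions below
    transmute(sum)
}
```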
Functions
__DMB⚠ | [Experimental] [AArch64 and mclass] Data Memory Barrier.
__DSB⚠ | [Experimental] [AArch64 and mclass] Data Synchronization Barrier.
__ISB⚠ | [Experimental] [AArch64 and mclass] Instruction Synchronization Barrier.
__NOP⚠ | [Experimental] [AArch64 and mclass] No Operation.
__SEV⚠ | [Experimental] [AArch64 and mclass] Send Event.
__WFE⚠ | [Experimental] [AArch64 and mclass] Wait For Event.
__WFI⚠ | [Experimental] [AArch64 and mclass] Wait For Interrupt.
__disable_fault_irq⚠ | [Experimental] [AArch64 and mclass] Disable FIQ.
__disable_irq⚠ | [Experimental] [AArch64 and mclass] Disable IRQ Interrupts.
__enable_fault_irq⚠ | [Experimental] [AArch64 and mclass] Enable FIQ.
__enable_irq⚠ | [Experimental] [AArch64 and mclass] Enable IRQ Interrupts.
__get_APSR⚠ | [Experimental] [AArch64 and mclass] Get APSR Register.
__get_BASEPRI⚠ | [Experimental] [AArch64 and mclass] Get Base Priority.
__get_CONTROL⚠ | [Experimental] [AArch64 and mclass] Get Control Register.
__get_FAULTMASK⚠ | [Experimental] [AArch64 and mclass] Get Fault Mask.
__get_IPSR⚠ | [Experimental] [AArch64 and mclass] Get IPSR Register.
__get_MSP⚠ | [Experimental] [AArch64 and mclass] Get Main Stack Pointer.
__get_PRIMASK⚠ | [Experimental] [AArch64 and mclass] Get Priority Mask.
__get_PSP⚠ | [Experimental] [AArch64 and mclass] Get Process Stack Pointer.
__get_xPSR⚠ | [Experimental] [AArch64 and mclass] Get xPSR Register.
__set_BASEPRI⚠ | [Experimental] [AArch64 and mclass] Set Base Priority.
__set_BASEPRI_MAX⚠ | [Experimental] [AArch64 and mclass] Set Base Priority with condition.
__set_CONTROL⚠ | [Experimental] [AArch64 and mclass] Set Control Register.
__set_FAULTMASK⚠ | [Experimental] [AArch64 and mclass] Set Fault Mask.
__set_MSP⚠ | [Experimental] [AArch64 and mclass] Set Main Stack Pointer.
__set_PRIMASK⚠ | [Experimental] [AArch64 and mclass] Set Priority Mask.
__set_PSP⚠ | [Experimental] [AArch64 and mclass] Set Process Stack Pointer.
_cls_u32⚠ | [Experimental] [AArch64] Counts the leading most-significant bits that are set.
_cls_u64⚠ | [Experimental] [AArch64] Counts the leading most-significant bits that are set.
_clz_u8⚠ | [Experimental] [AArch64 and v7] Count Leading Zeros.
_clz_u16⚠ | [Experimental] [AArch64 and v7] Count Leading Zeros.
_clz_u32⚠ | [Experimental] [AArch64 and v7] Count Leading Zeros.
_clz_u64⚠ | [Experimental] [AArch64] Count Leading Zeros.
_rbit_u32⚠ | [Experimental] [AArch64 and v7] Reverse the bit order.
_rbit_u64⚠ | [Experimental] [AArch64] Reverse the bit order.
_rev_u16⚠ | [Experimental] [AArch64] Reverse the order of the bytes.
_rev_u32⚠ | [Experimental] [AArch64] Reverse the order of the bytes.
_rev_u64⚠ | [Experimental] [AArch64] Reverse the order of the bytes.
qadd⚠ | [Experimental] [AArch64] Signed saturating addition.
qadd8⚠ | [Experimental] [AArch64] Saturating four 8-bit integer additions.
qadd16⚠ | [Experimental] [AArch64] Saturating two 16-bit integer additions.
qasx⚠ | [Experimental] [AArch64] Signed saturating add and subtract with exchange (16-bit lanes).
qsax⚠ | [Experimental] [AArch64] Signed saturating subtract and add with exchange (16-bit lanes).
qsub⚠ | [Experimental] [AArch64] Signed saturating subtraction.
qsub8⚠ | [Experimental] [AArch64] Saturating four 8-bit integer subtractions.
qsub16⚠ | [Experimental] [AArch64] Saturating two 16-bit integer subtractions.
sadd8⚠ | [Experimental] [AArch64] Parallel byte-wise signed addition (four 8-bit lanes).
sadd16⚠ | [Experimental] [AArch64] Parallel halfword-wise signed addition (two 16-bit lanes).
sasx⚠ | [Experimental] [AArch64] Signed add and subtract with exchange (16-bit lanes).
sel⚠ | [Experimental] [AArch64] Select bytes from each operand according to the APSR GE flags.
shadd8⚠ | [Experimental] [AArch64] Signed halving parallel byte-wise addition.
shadd16⚠ | [Experimental] [AArch64] Signed halving parallel halfword-wise addition.
shsub8⚠ | [Experimental] [AArch64] Signed halving parallel byte-wise subtraction.
shsub16⚠ | [Experimental] [AArch64] Signed halving parallel halfword-wise subtraction.
smlad⚠ | [Experimental] [AArch64] Dual 16-bit signed multiply with addition of products and 32-bit accumulation.
smlsd⚠ | [Experimental] [AArch64] Dual 16-bit signed multiply with subtraction of products, 32-bit accumulation, and overflow detection.
smuad⚠ | [Experimental] [AArch64] Signed Dual Multiply Add.
smuadx⚠ | [Experimental] [AArch64] Signed Dual Multiply Add Reversed.
smusd⚠ | [Experimental] [AArch64] Signed Dual Multiply Subtract.
smusdx⚠ | [Experimental] [AArch64] Signed Dual Multiply Subtract Reversed.
usad8⚠ | [Experimental] [AArch64] Sum of 8-bit absolute differences.
usad8a⚠ | [Experimental] [AArch64] Sum of 8-bit absolute differences and constant.
vadd_f32⚠ | [Experimental] [neon and v7 and AArch64] Vector add.
vadd_f64⚠ | [Experimental] [AArch64 and neon] Vector add.
vadd_s8⚠ | [Experimental] [neon and v7 and AArch64] Vector add.
vadd_s16⚠ | [Experimental] [neon and v7 and AArch64] Vector add.
vadd_s32⚠ | [Experimental] [neon and v7 and AArch64] Vector add.
vadd_u8⚠ | [Experimental] [neon and v7 and AArch64] Vector add.
vadd_u16⚠ | [Experimental] [neon and v7 and AArch64] Vector add.
vadd_u32⚠ | [Experimental] [neon and v7 and AArch64] Vector add.
vaddd_s64⚠ | [Experimental] [AArch64 and neon] Vector add.
vaddd_u64⚠ | [Experimental] [AArch64 and neon] Vector add.
vaddl_s8⚠ | [Experimental] [neon and v7 and AArch64] Vector long add.
vaddl_s16⚠ | [Experimental] [neon and v7 and AArch64] Vector long add.
vaddl_s32⚠ | [Experimental] [neon and v7 and AArch64] Vector long add.
vaddl_u8⚠ | [Experimental] [neon and v7 and AArch64] Vector long add.
vaddl_u16⚠ | [Experimental] [neon and v7 and AArch64] Vector long add.
vaddl_u32⚠ | [Experimental] [neon and v7 and AArch64] Vector long add.
vaddq_f32⚠ | [Experimental] [neon and v7 and AArch64] Vector add.
vaddq_f64⚠ | [Experimental] [AArch64 and neon] Vector add.
vaddq_s8⚠ | [Experimental] [neon and v7 and AArch64] Vector add.
vaddq_s16⚠ | [Experimental] [neon and v7 and AArch64] Vector add.
vaddq_s32⚠ | [Experimental] [neon and v7 and AArch64] Vector add.
vaddq_s64⚠ | [Experimental] [neon and v7 and AArch64] Vector add.
vaddq_u8⚠ | [Experimental] [neon and v7 and AArch64] Vector add.
vaddq_u16⚠ | [Experimental] [neon and v7 and AArch64] Vector add.
vaddq_u32⚠ | [Experimental] [neon and v7 and AArch64] Vector add.
vaddq_u64⚠ | [Experimental] [neon and v7 and AArch64] Vector add.
vaesdq_u8⚠ | [Experimental] [AArch64 and crypto] AES single round decryption.
vaeseq_u8⚠ | [Experimental] [AArch64 and crypto] AES single round encryption.
vaesimcq_u8⚠ | [Experimental] [AArch64 and crypto] AES inverse mix columns.
vaesmcq_u8⚠ | [Experimental] [AArch64 and crypto] AES mix columns.
vcombine_f32⚠ | [Experimental] [AArch64 and neon] Vector combine.
vcombine_f64⚠ | [Experimental] [AArch64 and neon] Vector combine.
vcombine_p8⚠ | [Experimental] [AArch64 and neon] Vector combine.
vcombine_p16⚠ | [Experimental] [AArch64 and neon] Vector combine.
vcombine_p64⚠ | [Experimental] [AArch64 and neon] Vector combine.
vcombine_s8⚠ | [Experimental] [AArch64 and neon] Vector combine.
vcombine_s16⚠ | [Experimental] [AArch64 and neon] Vector combine.
vcombine_s32⚠ | [Experimental] [AArch64 and neon] Vector combine.
vcombine_s64⚠ | [Experimental] [AArch64 and neon] Vector combine.
vcombine_u8⚠ | [Experimental] [AArch64 and neon] Vector combine.
vcombine_u16⚠ | [Experimental] [AArch64 and neon] Vector combine.
vcombine_u32⚠ | [Experimental] [AArch64 and neon] Vector combine.
vcombine_u64⚠ | [Experimental] [AArch64 and neon] Vector combine.
vmaxv_f32⚠ | [Experimental] [AArch64 and neon] Horizontal vector max.
vmaxv_s8⚠ | [Experimental] [AArch64 and neon] Horizontal vector max.
vmaxv_s16⚠ | [Experimental] [AArch64 and neon] Horizontal vector max.
vmaxv_s32⚠ | [Experimental] [AArch64 and neon] Horizontal vector max.
vmaxv_u8⚠ | [Experimental] [AArch64 and neon] Horizontal vector max.
vmaxv_u16⚠ | [Experimental] [AArch64 and neon] Horizontal vector max.
vmaxv_u32⚠ | [Experimental] [AArch64 and neon] Horizontal vector max.
vmaxvq_f32⚠ | [Experimental] [AArch64 and neon] Horizontal vector max.
vmaxvq_f64⚠ | [Experimental] [AArch64 and neon] Horizontal vector max.
vmaxvq_s8⚠ | [Experimental] [AArch64 and neon] Horizontal vector max.
vmaxvq_s16⚠ | [Experimental] [AArch64 and neon] Horizontal vector max.
vmaxvq_s32⚠ | [Experimental] [AArch64 and neon] Horizontal vector max.
vmaxvq_u8⚠ | [Experimental] [AArch64 and neon] Horizontal vector max.
vmaxvq_u16⚠ | [Experimental] [AArch64 and neon] Horizontal vector max.
vmaxvq_u32⚠ | [Experimental] [AArch64 and neon] Horizontal vector max.
vminv_f32⚠ | [Experimental] [AArch64 and neon] Horizontal vector min.
vminv_s8⚠ | [Experimental] [AArch64 and neon] Horizontal vector min.
vminv_s16⚠ | [Experimental] [AArch64 and neon] Horizontal vector min.
vminv_s32⚠ | [Experimental] [AArch64 and neon] Horizontal vector min.
vminv_u8⚠ | [Experimental] [AArch64 and neon] Horizontal vector min.
vminv_u16⚠ | [Experimental] [AArch64 and neon] Horizontal vector min.
vminv_u32⚠ | [Experimental] [AArch64 and neon] Horizontal vector min.
vminvq_f32⚠ | [Experimental] [AArch64 and neon] Horizontal vector min.
vminvq_f64⚠ | [Experimental] [AArch64 and neon] Horizontal vector min.
vminvq_s8⚠ | [Experimental] [AArch64 and neon] Horizontal vector min.
vminvq_s16⚠ | [Experimental] [AArch64 and neon] Horizontal vector min.
vminvq_s32⚠ | [Experimental] [AArch64 and neon] Horizontal vector min.
vminvq_u8⚠ | [Experimental] [AArch64 and neon] Horizontal vector min.
vminvq_u16⚠ | [Experimental] [AArch64 and neon] Horizontal vector min.
vminvq_u32⚠ | [Experimental] [AArch64 and neon] Horizontal vector min.
vmovl_s8⚠ | [Experimental] [neon and v7 and AArch64] Vector long move.
vmovl_s16⚠ | [Experimental] [neon and v7 and AArch64] Vector long move.
vmovl_s32⚠ | [Experimental] [neon and v7 and AArch64] Vector long move.
vmovl_u8⚠ | [Experimental] [neon and v7 and AArch64] Vector long move.
vmovl_u16⚠ | [Experimental] [neon and v7 and AArch64] Vector long move.
vmovl_u32⚠ | [Experimental] [neon and v7 and AArch64] Vector long move.
vmovn_s16⚠ | [Experimental] [neon and v7 and AArch64] Vector narrow integer.
vmovn_s32⚠ | [Experimental] [neon and v7 and AArch64] Vector narrow integer.
vmovn_s64⚠ | [Experimental] [neon and v7 and AArch64] Vector narrow integer.
vmovn_u16⚠ | [Experimental] [neon and v7 and AArch64] Vector narrow integer.
vmovn_u32⚠ | [Experimental] [neon and v7 and AArch64] Vector narrow integer.
vmovn_u64⚠ | [Experimental] [neon and v7 and AArch64] Vector narrow integer.
vpmax_f32⚠ | [Experimental] [neon and v7 and AArch64] Folding maximum of adjacent pairs.
vpmax_s8⚠ | [Experimental] [neon and v7 and AArch64] Folding maximum of adjacent pairs.
vpmax_s16⚠ | [Experimental] [neon and v7 and AArch64] Folding maximum of adjacent pairs.
vpmax_s32⚠ | [Experimental] [neon and v7 and AArch64] Folding maximum of adjacent pairs.
vpmax_u8⚠ | [Experimental] [neon and v7 and AArch64] Folding maximum of adjacent pairs.
vpmax_u16⚠ | [Experimental] [neon and v7 and AArch64] Folding maximum of adjacent pairs.
vpmax_u32⚠ | [Experimental] [neon and v7 and AArch64] Folding maximum of adjacent pairs.
vpmaxq_f32⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs.
vpmaxq_f64⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs.
vpmaxq_s8⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs.
vpmaxq_s16⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs.
vpmaxq_s32⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs.
vpmaxq_u8⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs.
vpmaxq_u16⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs.
vpmaxq_u32⚠ | [Experimental] [AArch64 and neon] Folding maximum of adjacent pairs.
vpmin_f32⚠ | [Experimental] [neon and v7 and AArch64] Folding minimum of adjacent pairs.
vpmin_s8⚠ | [Experimental] [neon and v7 and AArch64] Folding minimum of adjacent pairs.
vpmin_s16⚠ | [Experimental] [neon and v7 and AArch64] Folding minimum of adjacent pairs.
vpmin_s32⚠ | [Experimental] [neon and v7 and AArch64] Folding minimum of adjacent pairs.
vpmin_u8⚠ | [Experimental] [neon and v7 and AArch64] Folding minimum of adjacent pairs.
vpmin_u16⚠ | [Experimental] [neon and v7 and AArch64] Folding minimum of adjacent pairs.
vpmin_u32⚠ | [Experimental] [neon and v7 and AArch64] Folding minimum of adjacent pairs.
vpminq_f32⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs.
vpminq_f64⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs.
vpminq_s8⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs.
vpminq_s16⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs.
vpminq_s32⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs.
vpminq_u8⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs.
vpminq_u16⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs.
vpminq_u32⚠ | [Experimental] [AArch64 and neon] Folding minimum of adjacent pairs.
vqtbl1_p8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl1_s8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl1_u8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl1q_p8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl1q_s8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl1q_u8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl2_p8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl2_s8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl2_u8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl2q_p8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl2q_s8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl2q_u8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl3_p8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl3_s8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl3_u8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl3q_p8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl3q_s8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl3q_u8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl4_p8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl4_s8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl4_u8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl4q_p8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl4q_s8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbl4q_u8⚠ | [Experimental] [AArch64 and neon] Table look-up.
vqtbx1_p8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx1_s8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx1_u8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx1q_p8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx1q_s8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx1q_u8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx2_p8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx2_s8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx2_u8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx2q_p8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx2q_s8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx2q_u8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx3_p8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx3_s8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx3_u8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx3q_p8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx3q_s8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx3q_u8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx4_p8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx4_s8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx4_u8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx4q_p8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx4q_s8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vqtbx4q_u8⚠ | [Experimental] [AArch64 and neon] Extended table look-up.
vrsqrte_f32⚠ | [Experimental] [AArch64 and neon] Reciprocal square-root estimate.
vsha1cq_u32⚠ | [Experimental] [AArch64 and crypto] SHA1 hash update accelerator, choose.
vsha1h_u32⚠ | [Experimental] [AArch64 and crypto] SHA1 fixed rotate.
vsha1mq_u32⚠ | [Experimental] [AArch64 and crypto] SHA1 hash update accelerator, majority.
vsha1pq_u32⚠ | [Experimental] [AArch64 and crypto] SHA1 hash update accelerator, parity.
vsha1su0q_u32⚠ | [Experimental] [AArch64 and crypto] SHA1 schedule update accelerator, first part.
vsha1su1q_u32⚠ | [Experimental] [AArch64 and crypto] SHA1 schedule update accelerator, second part.
vsha256h2q_u32⚠ | [Experimental] [AArch64 and crypto] SHA256 hash update accelerator, upper part.
vsha256hq_u32⚠ | [Experimental] [AArch64 and crypto] SHA256 hash update accelerator.
vsha256su0q_u32⚠ | [Experimental] [AArch64 and crypto] SHA256 schedule update accelerator, first part.
vsha256su1q_u32⚠ | [Experimental] [AArch64 and crypto] SHA256 schedule update accelerator, second part.
vtbl1_p8⚠ | [Experimental] [AArch64 and neon,v7] Table look-up.
vtbl1_s8⚠ | [Experimental] [AArch64 and neon,v7] Table look-up.
vtbl1_u8⚠ | [Experimental] [AArch64 and neon,v7] Table look-up.
vtbl2_p8⚠ | [Experimental] [AArch64 and neon,v7] Table look-up.
vtbl2_s8⚠ | [Experimental] [AArch64 and neon,v7] Table look-up.
vtbl2_u8⚠ | [Experimental] [AArch64 and neon,v7] Table look-up.
vtbl3_p8⚠ | [Experimental] [AArch64 and neon,v7] Table look-up.
vtbl3_s8⚠ | [Experimental] [AArch64 and neon,v7] Table look-up.
vtbl3_u8⚠ | [Experimental] [AArch64 and neon,v7] Table look-up.
vtbl4_p8⚠ | [Experimental] [AArch64 and neon,v7] Table look-up.
vtbl4_s8⚠ | [Experimental] [AArch64 and neon,v7] Table look-up.
vtbl4_u8⚠ | [Experimental] [AArch64 and neon,v7] Table look-up.
vtbx1_p8⚠ | [Experimental] [AArch64 and neon,v7] Extended table look-up.
vtbx1_s8⚠ | [Experimental] [AArch64 and neon,v7] Extended table look-up.
vtbx1_u8⚠ | [Experimental] [AArch64 and neon,v7] Extended table look-up.
vtbx2_p8⚠ | [Experimental] [AArch64 and neon,v7] Extended table look-up.
vtbx2_s8⚠ | [Experimental] [AArch64 and neon,v7] Extended table look-up.
vtbx2_u8⚠ | [Experimental] [AArch64 and neon,v7] Extended table look-up.
vtbx3_p8⚠ | [Experimental] [AArch64 and neon,v7] Extended table look-up.
vtbx3_s8⚠ | [Experimental] [AArch64 and neon,v7] Extended table look-up.
vtbx3_u8⚠ | [Experimental] [AArch64 and neon,v7] Extended table look-up.
vtbx4_p8⚠ | [Experimental] [AArch64 and neon,v7] Extended table look-up.
vtbx4_s8⚠ | [Experimental] [AArch64 and neon,v7] Extended table look-up.
vtbx4_u8⚠ | [Experimental] [AArch64 and neon,v7] Extended table look-up.
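As one more hedged sketch, vqtbl1q_u8 from the table above performs a byte shuffle: each lane of the index vector selects a byte of the table vector, and out-of-range indices produce zero (shuffle_bytes is an illustrative helper, not part of the module):

```rust
#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "neon")]
unsafe fn shuffle_bytes(table: [u8; 16], idx: [u8; 16]) -> [u8; 16] {
    use core::arch::aarch64::{uint8x16_t, vqtbl1q_u8};
    use core::mem::transmute;

    let t: uint8x16_t = transmute(table);
    let i: uint8x16_t = transmute(idx);
    // TBL semantics: result[n] = table[idx[n]] when idx[n] < 16, else 0.
    transmute(vqtbl1q_u8(t, i))
}
```

The extended look-ups (vqtbx*) differ only in the out-of-range case: they keep the corresponding lane of an extra destination operand instead of zeroing it.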