portable_atomic/
lib.rs

1// SPDX-License-Identifier: Apache-2.0 OR MIT
2
3/*!
4<!-- Note: Document from sync-markdown-to-rustdoc:start through sync-markdown-to-rustdoc:end
5     is synchronized from README.md. Any changes to that range are not preserved. -->
6<!-- tidy:sync-markdown-to-rustdoc:start -->
7
8Portable atomic types including support for 128-bit atomics, atomic float, etc.
9
10- Provide all atomic integer types (`Atomic{I,U}{8,16,32,64}`) for all targets that can use atomic CAS. (i.e., all targets that can use `std`, and most no-std targets)
11- Provide `AtomicI128` and `AtomicU128`.
12- Provide `AtomicF32` and `AtomicF64`. ([optional, requires the `float` feature](#optional-features-float))
13- Provide `AtomicF16` and `AtomicF128` for [unstable `f16` and `f128`](https://github.com/rust-lang/rust/issues/116909). ([optional, requires the `float` feature and unstable cfgs](#optional-features-float))
14- Provide atomic load/store for targets where atomic is not available at all in the standard library. (RISC-V without A-extension, MSP430, AVR)
15- Provide atomic CAS for targets where atomic CAS is not available in the standard library. (thumbv6m, pre-v6 Arm, RISC-V without A-extension, MSP430, AVR, Xtensa, etc.) (always enabled for MSP430 and AVR, [optional](#optional-features-critical-section) otherwise)
16- Provide stable equivalents of the standard library's atomic types' unstable APIs, such as [`AtomicPtr::fetch_*`](https://github.com/rust-lang/rust/issues/99108).
17- Make features that require newer compilers, such as [`fetch_{max,min}`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_max), [`fetch_update`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_update), [`as_ptr`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.as_ptr), [`from_ptr`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.from_ptr), [`AtomicBool::fetch_not`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicBool.html#method.fetch_not) and [stronger CAS failure ordering](https://github.com/rust-lang/rust/pull/98383) available on Rust 1.34+.
18- Provide workaround for bugs in the standard library's atomic-related APIs, such as [rust-lang/rust#100650], `fence`/`compiler_fence` on MSP430 that cause LLVM error, etc.
19
20<!-- TODO:
21- mention Atomic{I,U}*::fetch_neg, Atomic{I*,U*,Ptr}::bit_*, etc.
22- mention optimizations not available in the standard library's equivalents
23-->
24
25portable-atomic version of `std::sync::Arc` is provided by the [portable-atomic-util](https://github.com/taiki-e/portable-atomic/tree/HEAD/portable-atomic-util) crate.
26
27## Usage
28
29Add this to your `Cargo.toml`:
30
31```toml
32[dependencies]
33portable-atomic = "1"
34```
35
36The default features are mainly for users who use atomics larger than the pointer width.
37If you don't need them, disabling the default features may reduce code size and compile time slightly.
38
39```toml
40[dependencies]
41portable-atomic = { version = "1", default-features = false }
42```
43
44If your crate supports no-std environment and requires atomic CAS, enabling the `require-cas` feature will allow the `portable-atomic` to display a [helpful error message](https://github.com/taiki-e/portable-atomic/pull/100) to users on targets requiring additional action on the user side to provide atomic CAS.
45
46```toml
47[dependencies]
48portable-atomic = { version = "1.3", default-features = false, features = ["require-cas"] }
49```
50
51## 128-bit atomics support
52
53Native 128-bit atomic operations are available on x86_64 (Rust 1.59+), AArch64 (Rust 1.59+), riscv64 (Rust 1.59+), Arm64EC (Rust 1.84+), s390x (Rust 1.84+), and powerpc64 (nightly only), otherwise the fallback implementation is used.
54
55On x86_64, even if `cmpxchg16b` is not available at compile-time (Note: `cmpxchg16b` target feature is enabled by default only on Apple, Windows (except Windows 7), and Fuchsia targets), run-time detection checks whether `cmpxchg16b` is available. If `cmpxchg16b` is not available at either compile-time or run-time detection, the fallback implementation is used. See also [`portable_atomic_no_outline_atomics`](#optional-cfg-no-outline-atomics) cfg.
56
57They are usually implemented using inline assembly, and when using Miri or ThreadSanitizer that do not support inline assembly, core intrinsics are used instead of inline assembly if possible.
58
59See the [`atomic128` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/atomic128/README.md) for details.
60
61## Optional features
62
63- **`fallback`** *(enabled by default)*<br>
64  Enable fallback implementations.
65
66  Disabling this allows only atomic types for which the platform natively supports atomic operations.
67
68- <a name="optional-features-float"></a>**`float`**<br>
69  Provide `AtomicF{32,64}`.
70
71  - When unstable `--cfg portable_atomic_unstable_f16` is also enabled, `AtomicF16` for [unstable `f16`](https://github.com/rust-lang/rust/issues/116909) is also provided.
72  - When unstable `--cfg portable_atomic_unstable_f128` is also enabled, `AtomicF128` for [unstable `f128`](https://github.com/rust-lang/rust/issues/116909) is also provided.
73
74  Note:
75  - Atomic float's `fetch_{add,sub,min,max}` are usually implemented using CAS loops, which can be slower than equivalent operations of atomic integers. As an exception, AArch64 with FEAT_LSFE and GPU targets have atomic float instructions and we use them on AArch64 when `lsfe` target feature is available at compile-time. We [plan to use atomic float instructions for GPU targets as well in the future.](https://github.com/taiki-e/portable-atomic/issues/34))
76  - Unstable cfgs are outside of the normal semver guarantees and minor or patch versions of portable-atomic may make breaking changes to them at any time.
77
78- **`std`**<br>
79  Use `std`.
80
81- <a name="optional-features-require-cas"></a>**`require-cas`**<br>
82  Emit compile error if atomic CAS is not available. See [Usage](#usage) section and [#100](https://github.com/taiki-e/portable-atomic/pull/100) for more.
83
84- <a name="optional-features-serde"></a>**`serde`**<br>
85  Implement `serde::{Serialize,Deserialize}` for atomic types.
86
87  Note:
88  - The MSRV when this feature is enabled depends on the MSRV of [serde].
89
90- <a name="optional-features-critical-section"></a>**`critical-section`**<br>
91  When this feature is enabled, this crate uses [critical-section] to provide atomic CAS for targets where
92  it is not natively available. When enabling it, you should provide a suitable critical section implementation
93  for the current target, see the [critical-section] documentation for details on how to do so.
94
95  `critical-section` support is useful to get atomic CAS when the [`unsafe-assume-single-core` feature](#optional-features-unsafe-assume-single-core) can't be used,
96  such as multi-core targets, unprivileged code running under some RTOS, or environments where disabling interrupts
97  needs extra care due to e.g. real-time requirements.
98
99  Note that with the `critical-section` feature, critical sections are taken for all atomic operations, while with
100  [`unsafe-assume-single-core` feature](#optional-features-unsafe-assume-single-core) some operations don't require disabling interrupts (loads and stores, but
101  additionally on MSP430 `add`, `sub`, `and`, `or`, `xor`, `not`). Therefore, for better performance, if
102  all the `critical-section` implementation for your target does is disable interrupts, prefer using
103  `unsafe-assume-single-core` feature instead.
104
105  Note:
106  - The MSRV when this feature is enabled depends on the MSRV of [critical-section].
107  - It is usually *not* recommended to always enable this feature in dependencies of the library.
108
109    Enabling this feature will prevent the end user from having the chance to take advantage of other (potentially) efficient implementations ([Implementations provided by `unsafe-assume-single-core` feature, default implementations on MSP430 and AVR](#optional-features-unsafe-assume-single-core), implementation proposed in [#60], etc. Other systems may also be supported in the future).
110
111    The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature. (However, it may make sense to enable this feature by default for libraries specific to a platform where other implementations are known not to work.)
112
113    As an example, the end-user's `Cargo.toml` that uses a crate that provides a critical-section implementation and a crate that depends on portable-atomic as an option would be expected to look like this:
114
115    ```toml
116    [dependencies]
117    portable-atomic = { version = "1", default-features = false, features = ["critical-section"] }
118    crate-provides-critical-section-impl = "..."
119    crate-uses-portable-atomic-as-feature = { version = "...", features = ["portable-atomic"] }
120    ```
121
122- <a name="optional-features-unsafe-assume-single-core"></a>**`unsafe-assume-single-core`**<br>
123  Assume that the target is single-core.
124  When this feature is enabled, this crate provides atomic CAS for targets where atomic CAS is not available in the standard library by disabling interrupts.
125
126  This feature is `unsafe`, and note the following safety requirements:
127  - Enabling this feature for multi-core systems is always **unsound**.
128  - This uses privileged instructions to disable interrupts, so it usually doesn't work on unprivileged mode.
129    Enabling this feature in an environment where privileged instructions are not available, or if the instructions used are not sufficient to disable interrupts in the system, it is also usually considered **unsound**, although the details are system-dependent.
130
131    The following are known cases:
132    - On pre-v6 Arm, this disables only IRQs by default. For many systems (e.g., GBA) this is enough. If the system need to disable both IRQs and FIQs, you need to enable the `disable-fiq` feature together.
133    - On RISC-V without A-extension, this generates code for machine-mode (M-mode) by default. If you enable the `s-mode` together, this generates code for supervisor-mode (S-mode). In particular, `qemu-system-riscv*` uses [OpenSBI](https://github.com/riscv-software-src/opensbi) as the default firmware.
134
135    See also the [`interrupt` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/interrupt/README.md).
136
137  Consider using the [`critical-section` feature](#optional-features-critical-section) for systems that cannot use this feature.
138
139  It is **very strongly discouraged** to enable this feature in libraries that depend on `portable-atomic`. The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature. (However, it may make sense to enable this feature by default for libraries specific to a platform where it is guaranteed to always be sound, for example in a hardware abstraction layer targeting a single-core chip.)
140
141  Armv6-M (thumbv6m), pre-v6 Arm (e.g., thumbv4t, thumbv5te), RISC-V without A-extension, and Xtensa are currently supported.
142
143  Since all MSP430 and AVR are single-core, we always provide atomic CAS for them without this feature.
144
145  Enabling this feature for targets that have atomic CAS will result in a compile error.
146
147  Feel free to submit an issue if your target is not supported yet.
148
149## Optional cfg
150
151One of the ways to enable cfg is to set [rustflags in the cargo config](https://doc.rust-lang.org/cargo/reference/config.html#targettriplerustflags):
152
153```toml
154# .cargo/config.toml
155[target.<target>]
156rustflags = ["--cfg", "portable_atomic_no_outline_atomics"]
157```
158
159Or set environment variable:
160
161```sh
162RUSTFLAGS="--cfg portable_atomic_no_outline_atomics" cargo ...
163```
164
165- <a name="optional-cfg-unsafe-assume-single-core"></a>**`--cfg portable_atomic_unsafe_assume_single_core`**<br>
166  Since 1.4.0, this cfg is an alias of [`unsafe-assume-single-core` feature](#optional-features-unsafe-assume-single-core).
167
168  Originally, we were providing these as cfgs instead of features, but based on a strong request from the embedded ecosystem, we have agreed to provide them as features as well. See [#94](https://github.com/taiki-e/portable-atomic/pull/94) for more.
169
170- <a name="optional-cfg-no-outline-atomics"></a>**`--cfg portable_atomic_no_outline_atomics`**<br>
171  Disable dynamic dispatching by run-time CPU feature detection.
172
173  If dynamic dispatching by run-time CPU feature detection is enabled, it allows maintaining support for older CPUs while using features that are not supported on older CPUs, such as CMPXCHG16B (x86_64) and FEAT_LSE/FEAT_LSE2 (AArch64).
174
175  Note:
176  - Dynamic detection is currently only supported in x86_64, AArch64, Arm, RISC-V, Arm64EC, and powerpc64, otherwise it works the same as when this cfg is set.
177  - If the required target features are enabled at compile-time, the atomic operations are inlined.
178  - This is compatible with no-std (as with all features except `std`).
179  - On some targets, run-time detection is disabled by default mainly for compatibility with incomplete build environments or support for it is experimental, and can be enabled by `--cfg portable_atomic_outline_atomics`. (When both cfg are enabled, `*_no_*` cfg is preferred.)
180  - Some AArch64 targets enable LLVM's `outline-atomics` target feature by default, so if you set this cfg, you may want to disable that as well. (portable-atomic's outline-atomics does not depend on the compiler-rt symbols, so even if you need to disable LLVM's outline-atomics, you may not need to disable portable-atomic's outline-atomics.)
181
182  See also the [`atomic128` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/atomic128/README.md).
183
184## Related Projects
185
186- [atomic-maybe-uninit]: Atomic operations on potentially uninitialized integers.
187- [atomic-memcpy]: Byte-wise atomic memcpy.
188
189[#60]: https://github.com/taiki-e/portable-atomic/issues/60
190[atomic-maybe-uninit]: https://github.com/taiki-e/atomic-maybe-uninit
191[atomic-memcpy]: https://github.com/taiki-e/atomic-memcpy
192[critical-section]: https://github.com/rust-embedded/critical-section
193[rust-lang/rust#100650]: https://github.com/rust-lang/rust/issues/100650
194[serde]: https://github.com/serde-rs/serde
195
196<!-- tidy:sync-markdown-to-rustdoc:end -->
197*/
198
199#![no_std]
200#![doc(test(
201    no_crate_inject,
202    attr(
203        deny(warnings, rust_2018_idioms, single_use_lifetimes),
204        allow(dead_code, unused_variables)
205    )
206))]
207#![cfg_attr(not(portable_atomic_no_unsafe_op_in_unsafe_fn), warn(unsafe_op_in_unsafe_fn))] // unsafe_op_in_unsafe_fn requires Rust 1.52
208#![cfg_attr(portable_atomic_no_unsafe_op_in_unsafe_fn, allow(unused_unsafe))]
209#![warn(
210    // Lints that may help when writing public library.
211    missing_debug_implementations,
212    // missing_docs,
213    clippy::alloc_instead_of_core,
214    clippy::exhaustive_enums,
215    clippy::exhaustive_structs,
216    clippy::impl_trait_in_params,
217    clippy::missing_inline_in_public_items,
218    clippy::std_instead_of_alloc,
219    clippy::std_instead_of_core,
220    // Code outside of cfg(feature = "float") shouldn't use float.
221    clippy::float_arithmetic,
222)]
223#![cfg_attr(not(portable_atomic_no_asm), warn(missing_docs))] // module-level #![allow(missing_docs)] doesn't work for macros on old rustc
224#![cfg_attr(portable_atomic_no_strict_provenance, allow(unstable_name_collisions))]
225#![allow(clippy::inline_always, clippy::used_underscore_items)]
226// asm_experimental_arch
227// AVR, MSP430, and Xtensa are tier 3 platforms and require nightly anyway.
228// On tier 2 platforms (powerpc64), we use cfg set by build script to
229// determine whether this feature is available or not.
230#![cfg_attr(
231    all(
232        not(portable_atomic_no_asm),
233        any(
234            target_arch = "avr",
235            target_arch = "msp430",
236            all(target_arch = "xtensa", portable_atomic_unsafe_assume_single_core),
237            all(target_arch = "powerpc64", portable_atomic_unstable_asm_experimental_arch),
238        ),
239    ),
240    feature(asm_experimental_arch)
241)]
242// f16/f128
243// cfg is unstable and explicitly enabled by the user
244#![cfg_attr(portable_atomic_unstable_f16, feature(f16))]
245#![cfg_attr(portable_atomic_unstable_f128, feature(f128))]
246// Old nightly only
247// These features are already stabilized or have already been removed from compilers,
248// and can safely be enabled for old nightly as long as version detection works.
249// - cfg(target_has_atomic)
250// - asm! on AArch64, Arm, RISC-V, x86, x86_64, Arm64EC, s390x
251// - llvm_asm! on AVR (tier 3) and MSP430 (tier 3)
252// - #[instruction_set] on non-Linux/Android pre-v6 Arm (tier 3)
253// This also helps us test that our assembly code works with the minimum external
254// LLVM version of the first rustc version that inline assembly stabilized.
255#![cfg_attr(portable_atomic_unstable_cfg_target_has_atomic, feature(cfg_target_has_atomic))]
256#![cfg_attr(
257    all(
258        portable_atomic_unstable_asm,
259        any(
260            target_arch = "aarch64",
261            target_arch = "arm",
262            target_arch = "riscv32",
263            target_arch = "riscv64",
264            target_arch = "x86",
265            target_arch = "x86_64",
266        ),
267    ),
268    feature(asm)
269)]
270#![cfg_attr(
271    all(
272        portable_atomic_unstable_asm_experimental_arch,
273        any(target_arch = "arm64ec", target_arch = "s390x"),
274    ),
275    feature(asm_experimental_arch)
276)]
277#![cfg_attr(
278    all(any(target_arch = "avr", target_arch = "msp430"), portable_atomic_no_asm),
279    feature(llvm_asm)
280)]
281#![cfg_attr(
282    all(
283        target_arch = "arm",
284        portable_atomic_unstable_isa_attribute,
285        any(test, portable_atomic_unsafe_assume_single_core),
286        not(any(target_feature = "v6", portable_atomic_target_feature = "v6")),
287        not(target_has_atomic = "ptr"),
288    ),
289    feature(isa_attribute)
290)]
291// Miri and/or ThreadSanitizer only
292// They do not support inline assembly, so we need to use unstable features instead.
293// Since they require nightly compilers anyway, we can use the unstable features.
294// This is not an ideal situation, but it is still better than always using lock-based
295// fallback and causing memory ordering problems to be missed by these checkers.
296#![cfg_attr(
297    all(
298        any(
299            target_arch = "aarch64",
300            target_arch = "arm64ec",
301            target_arch = "powerpc64",
302            target_arch = "s390x",
303        ),
304        any(miri, portable_atomic_sanitize_thread),
305    ),
306    allow(internal_features)
307)]
308#![cfg_attr(
309    all(
310        any(
311            target_arch = "aarch64",
312            target_arch = "arm64ec",
313            target_arch = "powerpc64",
314            target_arch = "s390x",
315        ),
316        any(miri, portable_atomic_sanitize_thread),
317    ),
318    feature(core_intrinsics)
319)]
320// docs.rs only (cfg is enabled by docs.rs, not build script)
321#![cfg_attr(docsrs, feature(doc_cfg))]
322#![cfg_attr(
323    all(
324        portable_atomic_no_atomic_load_store,
325        not(any(
326            target_arch = "avr",
327            target_arch = "bpf",
328            target_arch = "msp430",
329            target_arch = "riscv32",
330            target_arch = "riscv64",
331            feature = "critical-section",
332        )),
333    ),
334    allow(unused_imports, unused_macros, clippy::unused_trait_names)
335)]
336
337// There are currently no 128-bit or higher builtin targets.
338// (Although some of our generic code is written with the future
339// addition of 128-bit targets in mind.)
340// Note that Rust (and C99) pointers must be at least 16-bit (i.e., 8-bit targets are impossible): https://github.com/rust-lang/rust/pull/49305
341#[cfg(not(any(
342    target_pointer_width = "16",
343    target_pointer_width = "32",
344    target_pointer_width = "64",
345)))]
346compile_error!(
347    "portable-atomic currently only supports targets with {16,32,64}-bit pointer width; \
348     if you need support for others, \
349     please submit an issue at <https://github.com/taiki-e/portable-atomic>"
350);
351
352#[cfg(portable_atomic_unsafe_assume_single_core)]
353#[cfg_attr(portable_atomic_no_cfg_target_has_atomic, cfg(not(portable_atomic_no_atomic_cas)))]
354#[cfg_attr(not(portable_atomic_no_cfg_target_has_atomic), cfg(target_has_atomic = "ptr"))]
355compile_error!(
356    "`portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature) \
357     is not compatible with target that supports atomic CAS;\n\
358     see also <https://github.com/taiki-e/portable-atomic/issues/148> for troubleshooting"
359);
360#[cfg(portable_atomic_unsafe_assume_single_core)]
361#[cfg_attr(portable_atomic_no_cfg_target_has_atomic, cfg(portable_atomic_no_atomic_cas))]
362#[cfg_attr(not(portable_atomic_no_cfg_target_has_atomic), cfg(not(target_has_atomic = "ptr")))]
363#[cfg(not(any(
364    target_arch = "arm",
365    target_arch = "avr",
366    target_arch = "msp430",
367    target_arch = "riscv32",
368    target_arch = "riscv64",
369    target_arch = "xtensa",
370)))]
371compile_error!(
372    "`portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature) \
373     is not supported yet on this target;\n\
374     if you need unsafe-assume-single-core support for this target,\n\
375     please submit an issue at <https://github.com/taiki-e/portable-atomic>"
376);
377
378#[cfg(portable_atomic_no_outline_atomics)]
379#[cfg(not(any(
380    target_arch = "aarch64",
381    target_arch = "arm",
382    target_arch = "arm64ec",
383    target_arch = "powerpc64",
384    target_arch = "riscv32",
385    target_arch = "riscv64",
386    target_arch = "x86_64",
387)))]
388compile_error!("`portable_atomic_no_outline_atomics` cfg does not compatible with this target");
389#[cfg(portable_atomic_outline_atomics)]
390#[cfg(not(any(
391    target_arch = "aarch64",
392    target_arch = "powerpc64",
393    target_arch = "riscv32",
394    target_arch = "riscv64",
395)))]
396compile_error!("`portable_atomic_outline_atomics` cfg does not compatible with this target");
397
398#[cfg(portable_atomic_disable_fiq)]
399#[cfg(not(all(
400    target_arch = "arm",
401    not(any(target_feature = "mclass", portable_atomic_target_feature = "mclass")),
402)))]
403compile_error!(
404    "`portable_atomic_disable_fiq` cfg (`disable-fiq` feature) is only available on pre-v6 Arm"
405);
406#[cfg(portable_atomic_s_mode)]
407#[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64")))]
408compile_error!("`portable_atomic_s_mode` cfg (`s-mode` feature) is only available on RISC-V");
409#[cfg(portable_atomic_force_amo)]
410#[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64")))]
411compile_error!("`portable_atomic_force_amo` cfg (`force-amo` feature) is only available on RISC-V");
412
413#[cfg(portable_atomic_disable_fiq)]
414#[cfg(not(portable_atomic_unsafe_assume_single_core))]
415compile_error!(
416    "`portable_atomic_disable_fiq` cfg (`disable-fiq` feature) may only be used together with `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature)"
417);
418#[cfg(portable_atomic_s_mode)]
419#[cfg(not(portable_atomic_unsafe_assume_single_core))]
420compile_error!(
421    "`portable_atomic_s_mode` cfg (`s-mode` feature) may only be used together with `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature)"
422);
423#[cfg(portable_atomic_force_amo)]
424#[cfg(not(portable_atomic_unsafe_assume_single_core))]
425compile_error!(
426    "`portable_atomic_force_amo` cfg (`force-amo` feature) may only be used together with `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature)"
427);
428
429#[cfg(all(portable_atomic_unsafe_assume_single_core, feature = "critical-section"))]
430compile_error!(
431    "you may not enable `critical-section` feature and `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature) at the same time"
432);
433
434#[cfg(feature = "require-cas")]
435#[cfg_attr(
436    portable_atomic_no_cfg_target_has_atomic,
437    cfg(not(any(
438        not(portable_atomic_no_atomic_cas),
439        portable_atomic_unsafe_assume_single_core,
440        feature = "critical-section",
441        target_arch = "avr",
442        target_arch = "msp430",
443    )))
444)]
445#[cfg_attr(
446    not(portable_atomic_no_cfg_target_has_atomic),
447    cfg(not(any(
448        target_has_atomic = "ptr",
449        portable_atomic_unsafe_assume_single_core,
450        feature = "critical-section",
451        target_arch = "avr",
452        target_arch = "msp430",
453    )))
454)]
455compile_error!(
456    "dependents require atomic CAS but not available on this target by default;\n\
457    consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features.\n\
458    see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
459);
460
461#[cfg(any(test, feature = "std"))]
462extern crate std;
463
464#[macro_use]
465mod cfgs;
466#[cfg(target_pointer_width = "16")]
467pub use self::{cfg_has_atomic_16 as cfg_has_atomic_ptr, cfg_no_atomic_16 as cfg_no_atomic_ptr};
468#[cfg(target_pointer_width = "32")]
469pub use self::{cfg_has_atomic_32 as cfg_has_atomic_ptr, cfg_no_atomic_32 as cfg_no_atomic_ptr};
470#[cfg(target_pointer_width = "64")]
471pub use self::{cfg_has_atomic_64 as cfg_has_atomic_ptr, cfg_no_atomic_64 as cfg_no_atomic_ptr};
472#[cfg(target_pointer_width = "128")]
473pub use self::{cfg_has_atomic_128 as cfg_has_atomic_ptr, cfg_no_atomic_128 as cfg_no_atomic_ptr};
474
475#[macro_use]
476mod utils;
477
478#[cfg(test)]
479#[macro_use]
480mod tests;
481
482#[doc(no_inline)]
483pub use core::sync::atomic::Ordering;
484
485// LLVM doesn't support fence/compiler_fence for MSP430.
486#[cfg(target_arch = "msp430")]
487pub use self::imp::msp430::{compiler_fence, fence};
488#[doc(no_inline)]
489#[cfg(not(target_arch = "msp430"))]
490pub use core::sync::atomic::{compiler_fence, fence};
491
492mod imp;
493
494pub mod hint {
495    //! Re-export of the [`core::hint`] module.
496    //!
497    //! The only difference from the [`core::hint`] module is that [`spin_loop`]
498    //! is available in all rust versions that this crate supports.
499    //!
500    //! ```
501    //! use portable_atomic::hint;
502    //!
503    //! hint::spin_loop();
504    //! ```
505
506    #[doc(no_inline)]
507    pub use core::hint::*;
508
509    /// Emits a machine instruction to signal the processor that it is running in
510    /// a busy-wait spin-loop ("spin lock").
511    ///
512    /// Upon receiving the spin-loop signal the processor can optimize its behavior by,
513    /// for example, saving power or switching hyper-threads.
514    ///
515    /// This function is different from [`thread::yield_now`] which directly
516    /// yields to the system's scheduler, whereas `spin_loop` does not interact
517    /// with the operating system.
518    ///
519    /// A common use case for `spin_loop` is implementing bounded optimistic
520    /// spinning in a CAS loop in synchronization primitives. To avoid problems
521    /// like priority inversion, it is strongly recommended that the spin loop is
522    /// terminated after a finite amount of iterations and an appropriate blocking
523    /// syscall is made.
524    ///
525    /// **Note:** On platforms that do not support receiving spin-loop hints this
526    /// function does not do anything at all.
527    ///
528    /// [`thread::yield_now`]: https://doc.rust-lang.org/std/thread/fn.yield_now.html
529    #[inline]
530    pub fn spin_loop() {
531        #[allow(deprecated)]
532        core::sync::atomic::spin_loop_hint();
533    }
534}
535
536#[cfg(doc)]
537use core::sync::atomic::Ordering::{AcqRel, Acquire, Relaxed, Release, SeqCst};
538use core::{fmt, ptr};
539
540#[cfg(portable_atomic_no_strict_provenance)]
541#[cfg(miri)]
542use crate::utils::ptr::PtrExt as _;
543
544cfg_has_atomic_8! {
545/// A boolean type which can be safely shared between threads.
546///
547/// This type has the same in-memory representation as a [`bool`].
548///
549/// If the compiler and the platform support atomic loads and stores of `u8`,
550/// this type is a wrapper for the standard library's
551/// [`AtomicBool`](core::sync::atomic::AtomicBool). If the platform supports it
552/// but the compiler does not, atomic operations are implemented using inline
553/// assembly.
554#[repr(C, align(1))]
555pub struct AtomicBool {
556    v: core::cell::UnsafeCell<u8>,
557}
558
559impl Default for AtomicBool {
560    /// Creates an `AtomicBool` initialized to `false`.
561    #[inline]
562    fn default() -> Self {
563        Self::new(false)
564    }
565}
566
567impl From<bool> for AtomicBool {
568    /// Converts a `bool` into an `AtomicBool`.
569    #[inline]
570    fn from(b: bool) -> Self {
571        Self::new(b)
572    }
573}
574
575// Send is implicitly implemented.
576// SAFETY: any data races are prevented by disabling interrupts or
577// atomic intrinsics (see module-level comments).
578unsafe impl Sync for AtomicBool {}
579
580// UnwindSafe is implicitly implemented.
581#[cfg(not(portable_atomic_no_core_unwind_safe))]
582impl core::panic::RefUnwindSafe for AtomicBool {}
583#[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
584impl std::panic::RefUnwindSafe for AtomicBool {}
585
586impl_debug_and_serde!(AtomicBool);
587
588impl AtomicBool {
589    /// Creates a new `AtomicBool`.
590    ///
591    /// # Examples
592    ///
593    /// ```
594    /// use portable_atomic::AtomicBool;
595    ///
596    /// let atomic_true = AtomicBool::new(true);
597    /// let atomic_false = AtomicBool::new(false);
598    /// ```
599    #[inline]
600    #[must_use]
601    pub const fn new(v: bool) -> Self {
602        static_assert_layout!(AtomicBool, bool);
603        Self { v: core::cell::UnsafeCell::new(v as u8) }
604    }
605
606    // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
607    const_fn! {
608        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
609        /// Creates a new `AtomicBool` from a pointer.
610        ///
611        /// This is `const fn` on Rust 1.83+.
612        ///
613        /// # Safety
614        ///
615        /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that on some platforms this can
616        ///   be bigger than `align_of::<bool>()`).
617        /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
618        /// * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
619        ///   behind `ptr` must have a happens-before relationship with atomic accesses via the returned
620        ///   value (or vice-versa).
621        ///   * In other words, time periods where the value is accessed atomically may not overlap
622        ///     with periods where the value is accessed non-atomically.
623        ///   * This requirement is trivially satisfied if `ptr` is never used non-atomically for the
624        ///     duration of lifetime `'a`. Most use cases should be able to follow this guideline.
625        ///   * This requirement is also trivially satisfied if all accesses (atomic or not) are done
626        ///     from the same thread.
627        /// * If this atomic type is *not* lock-free:
628        ///   * Any accesses to the value behind `ptr` must have a happens-before relationship
629        ///     with accesses via the returned value (or vice-versa).
630        ///   * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
631        ///     be compatible with operations performed by this atomic type.
632        /// * This method must not be used to create overlapping or mixed-size atomic accesses, as
633        ///   these are not supported by the memory model.
634        ///
635        /// [valid]: core::ptr#safety
636        #[inline]
637        #[must_use]
638        pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a Self {
639            #[allow(clippy::cast_ptr_alignment)]
640            // SAFETY: guaranteed by the caller
641            unsafe { &*(ptr as *mut Self) }
642        }
643    }
644
645    /// Returns `true` if operations on values of this type are lock-free.
646    ///
647    /// If the compiler or the platform doesn't support the necessary
648    /// atomic instructions, global locks for every potentially
649    /// concurrent atomic operation will be used.
650    ///
651    /// # Examples
652    ///
653    /// ```
654    /// use portable_atomic::AtomicBool;
655    ///
656    /// let is_lock_free = AtomicBool::is_lock_free();
657    /// ```
658    #[inline]
659    #[must_use]
660    pub fn is_lock_free() -> bool {
661        imp::AtomicU8::is_lock_free()
662    }
663
664    /// Returns `true` if operations on values of this type are lock-free.
665    ///
666    /// If the compiler or the platform doesn't support the necessary
667    /// atomic instructions, global locks for every potentially
668    /// concurrent atomic operation will be used.
669    ///
670    /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
671    /// this type may be lock-free even if the function returns false.
672    ///
673    /// # Examples
674    ///
675    /// ```
676    /// use portable_atomic::AtomicBool;
677    ///
678    /// const IS_ALWAYS_LOCK_FREE: bool = AtomicBool::is_always_lock_free();
679    /// ```
680    #[inline]
681    #[must_use]
682    pub const fn is_always_lock_free() -> bool {
683        imp::AtomicU8::IS_ALWAYS_LOCK_FREE
684    }
685    #[cfg(test)]
686    const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
687
688    const_fn! {
689        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
690        /// Returns a mutable reference to the underlying [`bool`].
691        ///
692        /// This is safe because the mutable reference guarantees that no other threads are
693        /// concurrently accessing the atomic data.
694        ///
695        /// This is `const fn` on Rust 1.83+.
696        ///
697        /// # Examples
698        ///
699        /// ```
700        /// use portable_atomic::{AtomicBool, Ordering};
701        ///
702        /// let mut some_bool = AtomicBool::new(true);
703        /// assert_eq!(*some_bool.get_mut(), true);
704        /// *some_bool.get_mut() = false;
705        /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
706        /// ```
707        #[inline]
708        pub const fn get_mut(&mut self) -> &mut bool {
709            // SAFETY: the mutable reference guarantees unique ownership.
710            unsafe { &mut *self.as_ptr() }
711        }
712    }
713
714    // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
715    // https://github.com/rust-lang/rust/issues/76314
716
717    const_fn! {
718        const_if: #[cfg(not(portable_atomic_no_const_transmute))];
719        /// Consumes the atomic and returns the contained value.
720        ///
721        /// This is safe because passing `self` by value guarantees that no other threads are
722        /// concurrently accessing the atomic data.
723        ///
724        /// This is `const fn` on Rust 1.56+.
725        ///
726        /// # Examples
727        ///
728        /// ```
729        /// use portable_atomic::AtomicBool;
730        ///
731        /// let some_bool = AtomicBool::new(true);
732        /// assert_eq!(some_bool.into_inner(), true);
733        /// ```
734        #[inline]
735        pub const fn into_inner(self) -> bool {
736            // SAFETY: AtomicBool and u8 have the same size and in-memory representations,
737            // so they can be safely transmuted.
738            // (const UnsafeCell::into_inner is unstable)
739            unsafe { core::mem::transmute::<AtomicBool, u8>(self) != 0 }
740        }
741    }
742
743    /// Loads a value from the bool.
744    ///
745    /// `load` takes an [`Ordering`] argument which describes the memory ordering
746    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
747    ///
748    /// # Panics
749    ///
750    /// Panics if `order` is [`Release`] or [`AcqRel`].
751    ///
752    /// # Examples
753    ///
754    /// ```
755    /// use portable_atomic::{AtomicBool, Ordering};
756    ///
757    /// let some_bool = AtomicBool::new(true);
758    ///
759    /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
760    /// ```
761    #[inline]
762    #[cfg_attr(
763        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
764        track_caller
765    )]
766    pub fn load(&self, order: Ordering) -> bool {
767        self.as_atomic_u8().load(order) != 0
768    }
769
770    /// Stores a value into the bool.
771    ///
772    /// `store` takes an [`Ordering`] argument which describes the memory ordering
773    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
774    ///
775    /// # Panics
776    ///
777    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
778    ///
779    /// # Examples
780    ///
781    /// ```
782    /// use portable_atomic::{AtomicBool, Ordering};
783    ///
784    /// let some_bool = AtomicBool::new(true);
785    ///
786    /// some_bool.store(false, Ordering::Relaxed);
787    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
788    /// ```
789    #[inline]
790    #[cfg_attr(
791        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
792        track_caller
793    )]
794    pub fn store(&self, val: bool, order: Ordering) {
795        self.as_atomic_u8().store(val as u8, order);
796    }
797
798    cfg_has_atomic_cas_or_amo32! {
799    /// Stores a value into the bool, returning the previous value.
800    ///
801    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
802    /// of this operation. All ordering modes are possible. Note that using
803    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
804    /// using [`Release`] makes the load part [`Relaxed`].
805    ///
806    /// # Examples
807    ///
808    /// ```
809    /// use portable_atomic::{AtomicBool, Ordering};
810    ///
811    /// let some_bool = AtomicBool::new(true);
812    ///
813    /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
814    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
815    /// ```
816    #[inline]
817    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
818    pub fn swap(&self, val: bool, order: Ordering) -> bool {
819        #[cfg(any(
820            target_arch = "riscv32",
821            target_arch = "riscv64",
822            target_arch = "loongarch32",
823            target_arch = "loongarch64",
824        ))]
825        {
826            // See https://github.com/rust-lang/rust/pull/114034 for details.
827            // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L249
828            // https://godbolt.org/z/ofbGGdx44
829            if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) }
830        }
831        #[cfg(not(any(
832            target_arch = "riscv32",
833            target_arch = "riscv64",
834            target_arch = "loongarch32",
835            target_arch = "loongarch64",
836        )))]
837        {
838            self.as_atomic_u8().swap(val as u8, order) != 0
839        }
840    }
841
842    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
843    ///
844    /// The return value is a result indicating whether the new value was written and containing
845    /// the previous value. On success this value is guaranteed to be equal to `current`.
846    ///
847    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
848    /// ordering of this operation. `success` describes the required ordering for the
849    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
850    /// `failure` describes the required ordering for the load operation that takes place when
851    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
852    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
853    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
854    ///
855    /// # Panics
856    ///
857    /// Panics if `failure` is [`Release`], [`AcqRel`].
858    ///
859    /// # Examples
860    ///
861    /// ```
862    /// use portable_atomic::{AtomicBool, Ordering};
863    ///
864    /// let some_bool = AtomicBool::new(true);
865    ///
866    /// assert_eq!(
867    ///     some_bool.compare_exchange(true, false, Ordering::Acquire, Ordering::Relaxed),
868    ///     Ok(true)
869    /// );
870    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
871    ///
872    /// assert_eq!(
873    ///     some_bool.compare_exchange(true, true, Ordering::SeqCst, Ordering::Acquire),
874    ///     Err(false)
875    /// );
876    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
877    /// ```
878    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
879    #[inline]
880    #[cfg_attr(
881        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
882        track_caller
883    )]
884    pub fn compare_exchange(
885        &self,
886        current: bool,
887        new: bool,
888        success: Ordering,
889        failure: Ordering,
890    ) -> Result<bool, bool> {
891        #[cfg(any(
892            target_arch = "riscv32",
893            target_arch = "riscv64",
894            target_arch = "loongarch32",
895            target_arch = "loongarch64",
896        ))]
897        {
898            // See https://github.com/rust-lang/rust/pull/114034 for details.
899            // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L249
900            // https://godbolt.org/z/ofbGGdx44
901            crate::utils::assert_compare_exchange_ordering(success, failure);
902            let order = crate::utils::upgrade_success_ordering(success, failure);
903            let old = if current == new {
904                // This is a no-op, but we still need to perform the operation
905                // for memory ordering reasons.
906                self.fetch_or(false, order)
907            } else {
908                // This sets the value to the new one and returns the old one.
909                self.swap(new, order)
910            };
911            if old == current { Ok(old) } else { Err(old) }
912        }
913        #[cfg(not(any(
914            target_arch = "riscv32",
915            target_arch = "riscv64",
916            target_arch = "loongarch32",
917            target_arch = "loongarch64",
918        )))]
919        {
920            match self.as_atomic_u8().compare_exchange(current as u8, new as u8, success, failure) {
921                Ok(x) => Ok(x != 0),
922                Err(x) => Err(x != 0),
923            }
924        }
925    }
926
927    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
928    ///
929    /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
930    /// comparison succeeds, which can result in more efficient code on some platforms. The
931    /// return value is a result indicating whether the new value was written and containing the
932    /// previous value.
933    ///
934    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
935    /// ordering of this operation. `success` describes the required ordering for the
936    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
937    /// `failure` describes the required ordering for the load operation that takes place when
938    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
939    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
940    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
941    ///
942    /// # Panics
943    ///
944    /// Panics if `failure` is [`Release`], [`AcqRel`].
945    ///
946    /// # Examples
947    ///
948    /// ```
949    /// use portable_atomic::{AtomicBool, Ordering};
950    ///
951    /// let val = AtomicBool::new(false);
952    ///
953    /// let new = true;
954    /// let mut old = val.load(Ordering::Relaxed);
955    /// loop {
956    ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
957    ///         Ok(_) => break,
958    ///         Err(x) => old = x,
959    ///     }
960    /// }
961    /// ```
962    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
963    #[inline]
964    #[cfg_attr(
965        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
966        track_caller
967    )]
968    pub fn compare_exchange_weak(
969        &self,
970        current: bool,
971        new: bool,
972        success: Ordering,
973        failure: Ordering,
974    ) -> Result<bool, bool> {
975        #[cfg(any(
976            target_arch = "riscv32",
977            target_arch = "riscv64",
978            target_arch = "loongarch32",
979            target_arch = "loongarch64",
980        ))]
981        {
982            // See https://github.com/rust-lang/rust/pull/114034 for details.
983            // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L249
984            // https://godbolt.org/z/ofbGGdx44
985            self.compare_exchange(current, new, success, failure)
986        }
987        #[cfg(not(any(
988            target_arch = "riscv32",
989            target_arch = "riscv64",
990            target_arch = "loongarch32",
991            target_arch = "loongarch64",
992        )))]
993        {
994            match self
995                .as_atomic_u8()
996                .compare_exchange_weak(current as u8, new as u8, success, failure)
997            {
998                Ok(x) => Ok(x != 0),
999                Err(x) => Err(x != 0),
1000            }
1001        }
1002    }
1003
1004    /// Logical "and" with a boolean value.
1005    ///
1006    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
1007    /// the new value to the result.
1008    ///
1009    /// Returns the previous value.
1010    ///
1011    /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
1012    /// of this operation. All ordering modes are possible. Note that using
1013    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1014    /// using [`Release`] makes the load part [`Relaxed`].
1015    ///
1016    /// # Examples
1017    ///
1018    /// ```
1019    /// use portable_atomic::{AtomicBool, Ordering};
1020    ///
1021    /// let foo = AtomicBool::new(true);
1022    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
1023    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1024    ///
1025    /// let foo = AtomicBool::new(true);
1026    /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
1027    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1028    ///
1029    /// let foo = AtomicBool::new(false);
1030    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
1031    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1032    /// ```
1033    #[inline]
1034    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1035    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
1036        self.as_atomic_u8().fetch_and(val as u8, order) != 0
1037    }
1038
1039    /// Logical "and" with a boolean value.
1040    ///
1041    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
1042    /// the new value to the result.
1043    ///
1044    /// Unlike `fetch_and`, this does not return the previous value.
1045    ///
1046    /// `and` takes an [`Ordering`] argument which describes the memory ordering
1047    /// of this operation. All ordering modes are possible. Note that using
1048    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1049    /// using [`Release`] makes the load part [`Relaxed`].
1050    ///
1051    /// This function may generate more efficient code than `fetch_and` on some platforms.
1052    ///
1053    /// - x86/x86_64: `lock and` instead of `cmpxchg` loop
1054    /// - MSP430: `and` instead of disabling interrupts
1055    ///
1056    /// Note: On x86/x86_64, the use of either function should not usually
1057    /// affect the generated code, because LLVM can properly optimize the case
1058    /// where the result is unused.
1059    ///
1060    /// # Examples
1061    ///
1062    /// ```
1063    /// use portable_atomic::{AtomicBool, Ordering};
1064    ///
1065    /// let foo = AtomicBool::new(true);
1066    /// foo.and(false, Ordering::SeqCst);
1067    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1068    ///
1069    /// let foo = AtomicBool::new(true);
1070    /// foo.and(true, Ordering::SeqCst);
1071    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1072    ///
1073    /// let foo = AtomicBool::new(false);
1074    /// foo.and(false, Ordering::SeqCst);
1075    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1076    /// ```
1077    #[inline]
1078    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1079    pub fn and(&self, val: bool, order: Ordering) {
1080        self.as_atomic_u8().and(val as u8, order);
1081    }
1082
1083    /// Logical "nand" with a boolean value.
1084    ///
1085    /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
1086    /// the new value to the result.
1087    ///
1088    /// Returns the previous value.
1089    ///
1090    /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
1091    /// of this operation. All ordering modes are possible. Note that using
1092    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1093    /// using [`Release`] makes the load part [`Relaxed`].
1094    ///
1095    /// # Examples
1096    ///
1097    /// ```
1098    /// use portable_atomic::{AtomicBool, Ordering};
1099    ///
1100    /// let foo = AtomicBool::new(true);
1101    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
1102    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1103    ///
1104    /// let foo = AtomicBool::new(true);
1105    /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
1106    /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
1107    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1108    ///
1109    /// let foo = AtomicBool::new(false);
1110    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
1111    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1112    /// ```
1113    #[inline]
1114    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1115    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
1116        // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L973-L985
1117        if val {
1118            // !(x & true) == !x
1119            // We must invert the bool.
1120            self.fetch_xor(true, order)
1121        } else {
1122            // !(x & false) == true
1123            // We must set the bool to true.
1124            self.swap(true, order)
1125        }
1126    }
1127
1128    /// Logical "or" with a boolean value.
1129    ///
1130    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
1131    /// new value to the result.
1132    ///
1133    /// Returns the previous value.
1134    ///
1135    /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
1136    /// of this operation. All ordering modes are possible. Note that using
1137    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1138    /// using [`Release`] makes the load part [`Relaxed`].
1139    ///
1140    /// # Examples
1141    ///
1142    /// ```
1143    /// use portable_atomic::{AtomicBool, Ordering};
1144    ///
1145    /// let foo = AtomicBool::new(true);
1146    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
1147    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1148    ///
1149    /// let foo = AtomicBool::new(true);
1150    /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
1151    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1152    ///
1153    /// let foo = AtomicBool::new(false);
1154    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
1155    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1156    /// ```
1157    #[inline]
1158    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1159    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
1160        self.as_atomic_u8().fetch_or(val as u8, order) != 0
1161    }
1162
1163    /// Logical "or" with a boolean value.
1164    ///
1165    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
1166    /// new value to the result.
1167    ///
1168    /// Unlike `fetch_or`, this does not return the previous value.
1169    ///
1170    /// `or` takes an [`Ordering`] argument which describes the memory ordering
1171    /// of this operation. All ordering modes are possible. Note that using
1172    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1173    /// using [`Release`] makes the load part [`Relaxed`].
1174    ///
1175    /// This function may generate more efficient code than `fetch_or` on some platforms.
1176    ///
1177    /// - x86/x86_64: `lock or` instead of `cmpxchg` loop
1178    /// - MSP430: `bis` instead of disabling interrupts
1179    ///
1180    /// Note: On x86/x86_64, the use of either function should not usually
1181    /// affect the generated code, because LLVM can properly optimize the case
1182    /// where the result is unused.
1183    ///
1184    /// # Examples
1185    ///
1186    /// ```
1187    /// use portable_atomic::{AtomicBool, Ordering};
1188    ///
1189    /// let foo = AtomicBool::new(true);
1190    /// foo.or(false, Ordering::SeqCst);
1191    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1192    ///
1193    /// let foo = AtomicBool::new(true);
1194    /// foo.or(true, Ordering::SeqCst);
1195    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1196    ///
1197    /// let foo = AtomicBool::new(false);
1198    /// foo.or(false, Ordering::SeqCst);
1199    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1200    /// ```
1201    #[inline]
1202    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1203    pub fn or(&self, val: bool, order: Ordering) {
1204        self.as_atomic_u8().or(val as u8, order);
1205    }
1206
1207    /// Logical "xor" with a boolean value.
1208    ///
1209    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
1210    /// the new value to the result.
1211    ///
1212    /// Returns the previous value.
1213    ///
1214    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
1215    /// of this operation. All ordering modes are possible. Note that using
1216    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1217    /// using [`Release`] makes the load part [`Relaxed`].
1218    ///
1219    /// # Examples
1220    ///
1221    /// ```
1222    /// use portable_atomic::{AtomicBool, Ordering};
1223    ///
1224    /// let foo = AtomicBool::new(true);
1225    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
1226    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1227    ///
1228    /// let foo = AtomicBool::new(true);
1229    /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
1230    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1231    ///
1232    /// let foo = AtomicBool::new(false);
1233    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
1234    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1235    /// ```
1236    #[inline]
1237    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1238    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
1239        self.as_atomic_u8().fetch_xor(val as u8, order) != 0
1240    }
1241
1242    /// Logical "xor" with a boolean value.
1243    ///
1244    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
1245    /// the new value to the result.
1246    ///
1247    /// Unlike `fetch_xor`, this does not return the previous value.
1248    ///
1249    /// `xor` takes an [`Ordering`] argument which describes the memory ordering
1250    /// of this operation. All ordering modes are possible. Note that using
1251    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1252    /// using [`Release`] makes the load part [`Relaxed`].
1253    ///
1254    /// This function may generate more efficient code than `fetch_xor` on some platforms.
1255    ///
1256    /// - x86/x86_64: `lock xor` instead of `cmpxchg` loop
1257    /// - MSP430: `xor` instead of disabling interrupts
1258    ///
1259    /// Note: On x86/x86_64, the use of either function should not usually
1260    /// affect the generated code, because LLVM can properly optimize the case
1261    /// where the result is unused.
1262    ///
1263    /// # Examples
1264    ///
1265    /// ```
1266    /// use portable_atomic::{AtomicBool, Ordering};
1267    ///
1268    /// let foo = AtomicBool::new(true);
1269    /// foo.xor(false, Ordering::SeqCst);
1270    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1271    ///
1272    /// let foo = AtomicBool::new(true);
1273    /// foo.xor(true, Ordering::SeqCst);
1274    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1275    ///
1276    /// let foo = AtomicBool::new(false);
1277    /// foo.xor(false, Ordering::SeqCst);
1278    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1279    /// ```
1280    #[inline]
1281    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1282    pub fn xor(&self, val: bool, order: Ordering) {
1283        self.as_atomic_u8().xor(val as u8, order);
1284    }
1285
1286    /// Logical "not" with a boolean value.
1287    ///
1288    /// Performs a logical "not" operation on the current value, and sets
1289    /// the new value to the result.
1290    ///
1291    /// Returns the previous value.
1292    ///
1293    /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
1294    /// of this operation. All ordering modes are possible. Note that using
1295    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1296    /// using [`Release`] makes the load part [`Relaxed`].
1297    ///
1298    /// # Examples
1299    ///
1300    /// ```
1301    /// use portable_atomic::{AtomicBool, Ordering};
1302    ///
1303    /// let foo = AtomicBool::new(true);
1304    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
1305    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1306    ///
1307    /// let foo = AtomicBool::new(false);
1308    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
1309    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1310    /// ```
1311    #[inline]
1312    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1313    pub fn fetch_not(&self, order: Ordering) -> bool {
1314        self.fetch_xor(true, order)
1315    }
1316
1317    /// Logical "not" with a boolean value.
1318    ///
1319    /// Performs a logical "not" operation on the current value, and sets
1320    /// the new value to the result.
1321    ///
1322    /// Unlike `fetch_not`, this does not return the previous value.
1323    ///
1324    /// `not` takes an [`Ordering`] argument which describes the memory ordering
1325    /// of this operation. All ordering modes are possible. Note that using
1326    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1327    /// using [`Release`] makes the load part [`Relaxed`].
1328    ///
1329    /// This function may generate more efficient code than `fetch_not` on some platforms.
1330    ///
1331    /// - x86/x86_64: `lock xor` instead of `cmpxchg` loop
1332    /// - MSP430: `xor` instead of disabling interrupts
1333    ///
1334    /// Note: On x86/x86_64, the use of either function should not usually
1335    /// affect the generated code, because LLVM can properly optimize the case
1336    /// where the result is unused.
1337    ///
1338    /// # Examples
1339    ///
1340    /// ```
1341    /// use portable_atomic::{AtomicBool, Ordering};
1342    ///
1343    /// let foo = AtomicBool::new(true);
1344    /// foo.not(Ordering::SeqCst);
1345    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1346    ///
1347    /// let foo = AtomicBool::new(false);
1348    /// foo.not(Ordering::SeqCst);
1349    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1350    /// ```
1351    #[inline]
1352    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1353    pub fn not(&self, order: Ordering) {
1354        self.xor(true, order);
1355    }
1356
1357    /// Fetches the value, and applies a function to it that returns an optional
1358    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1359    /// returned `Some(_)`, else `Err(previous_value)`.
1360    ///
1361    /// Note: This may call the function multiple times if the value has been
1362    /// changed from other threads in the meantime, as long as the function
1363    /// returns `Some(_)`, but the function will have been applied only once to
1364    /// the stored value.
1365    ///
1366    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1367    /// ordering of this operation. The first describes the required ordering for
1368    /// when the operation finally succeeds while the second describes the
1369    /// required ordering for loads. These correspond to the success and failure
1370    /// orderings of [`compare_exchange`](Self::compare_exchange) respectively.
1371    ///
1372    /// Using [`Acquire`] as success ordering makes the store part of this
1373    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1374    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1375    /// [`Acquire`] or [`Relaxed`].
1376    ///
1377    /// # Considerations
1378    ///
1379    /// This method is not magic; it is not provided by the hardware.
1380    /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
1381    /// and suffers from the same drawbacks.
1382    /// In particular, this method will not circumvent the [ABA Problem].
1383    ///
1384    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1385    ///
1386    /// # Panics
1387    ///
1388    /// Panics if `fetch_order` is [`Release`], [`AcqRel`].
1389    ///
1390    /// # Examples
1391    ///
1392    /// ```
1393    /// use portable_atomic::{AtomicBool, Ordering};
1394    ///
1395    /// let x = AtomicBool::new(false);
1396    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1397    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1398    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1399    /// assert_eq!(x.load(Ordering::SeqCst), false);
1400    /// ```
1401    #[inline]
1402    #[cfg_attr(
1403        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1404        track_caller
1405    )]
1406    pub fn fetch_update<F>(
1407        &self,
1408        set_order: Ordering,
1409        fetch_order: Ordering,
1410        mut f: F,
1411    ) -> Result<bool, bool>
1412    where
1413        F: FnMut(bool) -> Option<bool>,
1414    {
1415        let mut prev = self.load(fetch_order);
1416        while let Some(next) = f(prev) {
1417            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1418                x @ Ok(_) => return x,
1419                Err(next_prev) => prev = next_prev,
1420            }
1421        }
1422        Err(prev)
1423    }
1424    } // cfg_has_atomic_cas_or_amo32!
1425
1426    const_fn! {
1427        // This function is actually `const fn`-compatible on Rust 1.32+,
1428        // but makes `const fn` only on Rust 1.58+ to match other atomic types.
1429        const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
1430        /// Returns a mutable pointer to the underlying [`bool`].
1431        ///
1432        /// Returning an `*mut` pointer from a shared reference to this atomic is
1433        /// safe because the atomic types work with interior mutability. Any use of
1434        /// the returned raw pointer requires an `unsafe` block and has to uphold
1435        /// the safety requirements. If there is concurrent access, note the following
1436        /// additional safety requirements:
1437        ///
1438        /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
1439        ///   operations on it must be atomic.
1440        /// - Otherwise, any concurrent operations on it must be compatible with
1441        ///   operations performed by this atomic type.
1442        ///
1443        /// This is `const fn` on Rust 1.58+.
1444        #[inline]
1445        pub const fn as_ptr(&self) -> *mut bool {
1446            self.v.get() as *mut bool
1447        }
1448    }
1449
1450    #[inline(always)]
1451    fn as_atomic_u8(&self) -> &imp::AtomicU8 {
1452        // SAFETY: AtomicBool and imp::AtomicU8 have the same layout,
1453        // and both access data in the same way.
1454        unsafe { &*(self as *const Self as *const imp::AtomicU8) }
1455    }
1456}
1457// See https://github.com/taiki-e/portable-atomic/issues/180
1458#[cfg(not(feature = "require-cas"))]
1459cfg_no_atomic_cas! {
1460#[doc(hidden)]
1461#[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
1462impl<'a> AtomicBool {
1463    cfg_no_atomic_cas_or_amo32! {
1464    #[inline]
1465    pub fn swap(&self, val: bool, order: Ordering) -> bool
1466    where
1467        &'a Self: HasSwap,
1468    {
1469        unimplemented!()
1470    }
1471    #[inline]
1472    pub fn compare_exchange(
1473        &self,
1474        current: bool,
1475        new: bool,
1476        success: Ordering,
1477        failure: Ordering,
1478    ) -> Result<bool, bool>
1479    where
1480        &'a Self: HasCompareExchange,
1481    {
1482        unimplemented!()
1483    }
1484    #[inline]
1485    pub fn compare_exchange_weak(
1486        &self,
1487        current: bool,
1488        new: bool,
1489        success: Ordering,
1490        failure: Ordering,
1491    ) -> Result<bool, bool>
1492    where
1493        &'a Self: HasCompareExchangeWeak,
1494    {
1495        unimplemented!()
1496    }
1497    #[inline]
1498    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool
1499    where
1500        &'a Self: HasFetchAnd,
1501    {
1502        unimplemented!()
1503    }
1504    #[inline]
1505    pub fn and(&self, val: bool, order: Ordering)
1506    where
1507        &'a Self: HasAnd,
1508    {
1509        unimplemented!()
1510    }
1511    #[inline]
1512    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool
1513    where
1514        &'a Self: HasFetchNand,
1515    {
1516        unimplemented!()
1517    }
1518    #[inline]
1519    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool
1520    where
1521        &'a Self: HasFetchOr,
1522    {
1523        unimplemented!()
1524    }
1525    #[inline]
1526    pub fn or(&self, val: bool, order: Ordering)
1527    where
1528        &'a Self: HasOr,
1529    {
1530        unimplemented!()
1531    }
1532    #[inline]
1533    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool
1534    where
1535        &'a Self: HasFetchXor,
1536    {
1537        unimplemented!()
1538    }
1539    #[inline]
1540    pub fn xor(&self, val: bool, order: Ordering)
1541    where
1542        &'a Self: HasXor,
1543    {
1544        unimplemented!()
1545    }
1546    #[inline]
1547    pub fn fetch_not(&self, order: Ordering) -> bool
1548    where
1549        &'a Self: HasFetchNot,
1550    {
1551        unimplemented!()
1552    }
1553    #[inline]
1554    pub fn not(&self, order: Ordering)
1555    where
1556        &'a Self: HasNot,
1557    {
1558        unimplemented!()
1559    }
1560    #[inline]
1561    pub fn fetch_update<F>(
1562        &self,
1563        set_order: Ordering,
1564        fetch_order: Ordering,
1565        f: F,
1566    ) -> Result<bool, bool>
1567    where
1568        F: FnMut(bool) -> Option<bool>,
1569        &'a Self: HasFetchUpdate,
1570    {
1571        unimplemented!()
1572    }
1573    } // cfg_no_atomic_cas_or_amo32!
1574}
1575} // cfg_no_atomic_cas!
1576} // cfg_has_atomic_8!
1577
1578cfg_has_atomic_ptr! {
1579/// A raw pointer type which can be safely shared between threads.
1580///
1581/// This type has the same in-memory representation as a `*mut T`.
1582///
1583/// If the compiler and the platform support atomic loads and stores of pointers,
1584/// this type is a wrapper for the standard library's
1585/// [`AtomicPtr`](core::sync::atomic::AtomicPtr). If the platform supports it
1586/// but the compiler does not, atomic operations are implemented using inline
1587/// assembly.
1588// We can use #[repr(transparent)] here, but #[repr(C, align(N))]
1589// will show clearer docs.
1590#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
1591#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
1592#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
1593#[cfg_attr(target_pointer_width = "128", repr(C, align(16)))]
1594pub struct AtomicPtr<T> {
1595    inner: imp::AtomicPtr<T>,
1596}
1597
1598impl<T> Default for AtomicPtr<T> {
1599    /// Creates a null `AtomicPtr<T>`.
1600    #[inline]
1601    fn default() -> Self {
1602        Self::new(ptr::null_mut())
1603    }
1604}
1605
1606impl<T> From<*mut T> for AtomicPtr<T> {
1607    #[inline]
1608    fn from(p: *mut T) -> Self {
1609        Self::new(p)
1610    }
1611}
1612
1613impl<T> fmt::Debug for AtomicPtr<T> {
1614    #[inline] // fmt is not hot path, but #[inline] on fmt seems to still be useful: https://github.com/rust-lang/rust/pull/117727
1615    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1616        // std atomic types use Relaxed in Debug::fmt: https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L2188
1617        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
1618    }
1619}
1620
1621impl<T> fmt::Pointer for AtomicPtr<T> {
1622    #[inline] // fmt is not hot path, but #[inline] on fmt seems to still be useful: https://github.com/rust-lang/rust/pull/117727
1623    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1624        // std atomic types use Relaxed in Debug::fmt: https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L2188
1625        fmt::Pointer::fmt(&self.load(Ordering::Relaxed), f)
1626    }
1627}
1628
1629// UnwindSafe is implicitly implemented.
1630#[cfg(not(portable_atomic_no_core_unwind_safe))]
1631impl<T> core::panic::RefUnwindSafe for AtomicPtr<T> {}
1632#[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
1633impl<T> std::panic::RefUnwindSafe for AtomicPtr<T> {}
1634
1635impl<T> AtomicPtr<T> {
1636    /// Creates a new `AtomicPtr`.
1637    ///
1638    /// # Examples
1639    ///
1640    /// ```
1641    /// use portable_atomic::AtomicPtr;
1642    ///
1643    /// let ptr = &mut 5;
1644    /// let atomic_ptr = AtomicPtr::new(ptr);
1645    /// ```
1646    #[inline]
1647    #[must_use]
1648    pub const fn new(p: *mut T) -> Self {
1649        static_assert_layout!(AtomicPtr<()>, *mut ());
1650        Self { inner: imp::AtomicPtr::new(p) }
1651    }
1652
1653    // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
1654    const_fn! {
1655        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
1656        /// Creates a new `AtomicPtr` from a pointer.
1657        ///
1658        /// This is `const fn` on Rust 1.83+.
1659        ///
1660        /// # Safety
1661        ///
1662        /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this
1663        ///   can be bigger than `align_of::<*mut T>()`).
1664        /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
1665        /// * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
1666        ///   behind `ptr` must have a happens-before relationship with atomic accesses via the returned
1667        ///   value (or vice-versa).
1668        ///   * In other words, time periods where the value is accessed atomically may not overlap
1669        ///     with periods where the value is accessed non-atomically.
1670        ///   * This requirement is trivially satisfied if `ptr` is never used non-atomically for the
1671        ///     duration of lifetime `'a`. Most use cases should be able to follow this guideline.
1672        ///   * This requirement is also trivially satisfied if all accesses (atomic or not) are done
1673        ///     from the same thread.
1674        /// * If this atomic type is *not* lock-free:
1675        ///   * Any accesses to the value behind `ptr` must have a happens-before relationship
1676        ///     with accesses via the returned value (or vice-versa).
1677        ///   * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
1678        ///     be compatible with operations performed by this atomic type.
1679        /// * This method must not be used to create overlapping or mixed-size atomic accesses, as
1680        ///   these are not supported by the memory model.
1681        ///
1682        /// [valid]: core::ptr#safety
1683        #[inline]
1684        #[must_use]
1685        pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a Self {
1686            #[allow(clippy::cast_ptr_alignment)]
1687            // SAFETY: guaranteed by the caller
1688            unsafe { &*(ptr as *mut Self) }
1689        }
1690    }
1691
1692    /// Returns `true` if operations on values of this type are lock-free.
1693    ///
1694    /// If the compiler or the platform doesn't support the necessary
1695    /// atomic instructions, global locks for every potentially
1696    /// concurrent atomic operation will be used.
1697    ///
1698    /// # Examples
1699    ///
1700    /// ```
1701    /// use portable_atomic::AtomicPtr;
1702    ///
1703    /// let is_lock_free = AtomicPtr::<()>::is_lock_free();
1704    /// ```
1705    #[inline]
1706    #[must_use]
1707    pub fn is_lock_free() -> bool {
1708        <imp::AtomicPtr<T>>::is_lock_free()
1709    }
1710
1711    /// Returns `true` if operations on values of this type are lock-free.
1712    ///
1713    /// If the compiler or the platform doesn't support the necessary
1714    /// atomic instructions, global locks for every potentially
1715    /// concurrent atomic operation will be used.
1716    ///
1717    /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
1718    /// this type may be lock-free even if the function returns false.
1719    ///
1720    /// # Examples
1721    ///
1722    /// ```
1723    /// use portable_atomic::AtomicPtr;
1724    ///
1725    /// const IS_ALWAYS_LOCK_FREE: bool = AtomicPtr::<()>::is_always_lock_free();
1726    /// ```
1727    #[inline]
1728    #[must_use]
1729    pub const fn is_always_lock_free() -> bool {
1730        <imp::AtomicPtr<T>>::IS_ALWAYS_LOCK_FREE
1731    }
1732    #[cfg(test)]
1733    const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
1734
1735    const_fn! {
1736        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
1737        /// Returns a mutable reference to the underlying pointer.
1738        ///
1739        /// This is safe because the mutable reference guarantees that no other threads are
1740        /// concurrently accessing the atomic data.
1741        ///
1742        /// This is `const fn` on Rust 1.83+.
1743        ///
1744        /// # Examples
1745        ///
1746        /// ```
1747        /// use portable_atomic::{AtomicPtr, Ordering};
1748        ///
1749        /// let mut data = 10;
1750        /// let mut atomic_ptr = AtomicPtr::new(&mut data);
1751        /// let mut other_data = 5;
1752        /// *atomic_ptr.get_mut() = &mut other_data;
1753        /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
1754        /// ```
1755        #[inline]
1756        pub const fn get_mut(&mut self) -> &mut *mut T {
1757            // SAFETY: the mutable reference guarantees unique ownership.
1758            // (core::sync::atomic::Atomic*::get_mut is not const yet)
1759            unsafe { &mut *self.as_ptr() }
1760        }
1761    }
1762
1763    // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
1764    // https://github.com/rust-lang/rust/issues/76314
1765
1766    const_fn! {
1767        const_if: #[cfg(not(portable_atomic_no_const_transmute))];
1768        /// Consumes the atomic and returns the contained value.
1769        ///
1770        /// This is safe because passing `self` by value guarantees that no other threads are
1771        /// concurrently accessing the atomic data.
1772        ///
1773        /// This is `const fn` on Rust 1.56+.
1774        ///
1775        /// # Examples
1776        ///
1777        /// ```
1778        /// use portable_atomic::AtomicPtr;
1779        ///
1780        /// let mut data = 5;
1781        /// let atomic_ptr = AtomicPtr::new(&mut data);
1782        /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
1783        /// ```
1784        #[inline]
1785        pub const fn into_inner(self) -> *mut T {
1786            // SAFETY: AtomicPtr<T> and *mut T have the same size and in-memory representations,
1787            // so they can be safely transmuted.
1788            // (const UnsafeCell::into_inner is unstable)
1789            unsafe { core::mem::transmute(self) }
1790        }
1791    }
1792
1793    /// Loads a value from the pointer.
1794    ///
1795    /// `load` takes an [`Ordering`] argument which describes the memory ordering
1796    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
1797    ///
1798    /// # Panics
1799    ///
1800    /// Panics if `order` is [`Release`] or [`AcqRel`].
1801    ///
1802    /// # Examples
1803    ///
1804    /// ```
1805    /// use portable_atomic::{AtomicPtr, Ordering};
1806    ///
1807    /// let ptr = &mut 5;
1808    /// let some_ptr = AtomicPtr::new(ptr);
1809    ///
1810    /// let value = some_ptr.load(Ordering::Relaxed);
1811    /// ```
1812    #[inline]
1813    #[cfg_attr(
1814        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1815        track_caller
1816    )]
1817    pub fn load(&self, order: Ordering) -> *mut T {
1818        self.inner.load(order)
1819    }
1820
1821    /// Stores a value into the pointer.
1822    ///
1823    /// `store` takes an [`Ordering`] argument which describes the memory ordering
1824    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
1825    ///
1826    /// # Panics
1827    ///
1828    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
1829    ///
1830    /// # Examples
1831    ///
1832    /// ```
1833    /// use portable_atomic::{AtomicPtr, Ordering};
1834    ///
1835    /// let ptr = &mut 5;
1836    /// let some_ptr = AtomicPtr::new(ptr);
1837    ///
1838    /// let other_ptr = &mut 10;
1839    ///
1840    /// some_ptr.store(other_ptr, Ordering::Relaxed);
1841    /// ```
1842    #[inline]
1843    #[cfg_attr(
1844        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1845        track_caller
1846    )]
1847    pub fn store(&self, ptr: *mut T, order: Ordering) {
1848        self.inner.store(ptr, order);
1849    }
1850
1851    cfg_has_atomic_cas_or_amo32! {
1852    /// Stores a value into the pointer, returning the previous value.
1853    ///
1854    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
1855    /// of this operation. All ordering modes are possible. Note that using
1856    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1857    /// using [`Release`] makes the load part [`Relaxed`].
1858    ///
1859    /// # Examples
1860    ///
1861    /// ```
1862    /// use portable_atomic::{AtomicPtr, Ordering};
1863    ///
1864    /// let ptr = &mut 5;
1865    /// let some_ptr = AtomicPtr::new(ptr);
1866    ///
1867    /// let other_ptr = &mut 10;
1868    ///
1869    /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
1870    /// ```
1871    #[inline]
1872    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1873    pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
1874        self.inner.swap(ptr, order)
1875    }
1876
1877    cfg_has_atomic_cas! {
1878    /// Stores a value into the pointer if the current value is the same as the `current` value.
1879    ///
1880    /// The return value is a result indicating whether the new value was written and containing
1881    /// the previous value. On success this value is guaranteed to be equal to `current`.
1882    ///
1883    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
1884    /// ordering of this operation. `success` describes the required ordering for the
1885    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1886    /// `failure` describes the required ordering for the load operation that takes place when
1887    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1888    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1889    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1890    ///
1891    /// # Panics
1892    ///
1893    /// Panics if `failure` is [`Release`], [`AcqRel`].
1894    ///
1895    /// # Examples
1896    ///
1897    /// ```
1898    /// use portable_atomic::{AtomicPtr, Ordering};
1899    ///
1900    /// let ptr = &mut 5;
1901    /// let some_ptr = AtomicPtr::new(ptr);
1902    ///
1903    /// let other_ptr = &mut 10;
1904    ///
1905    /// let value = some_ptr.compare_exchange(ptr, other_ptr, Ordering::SeqCst, Ordering::Relaxed);
1906    /// ```
1907    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
1908    #[inline]
1909    #[cfg_attr(
1910        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1911        track_caller
1912    )]
1913    pub fn compare_exchange(
1914        &self,
1915        current: *mut T,
1916        new: *mut T,
1917        success: Ordering,
1918        failure: Ordering,
1919    ) -> Result<*mut T, *mut T> {
1920        self.inner.compare_exchange(current, new, success, failure)
1921    }
1922
1923    /// Stores a value into the pointer if the current value is the same as the `current` value.
1924    ///
1925    /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
1926    /// comparison succeeds, which can result in more efficient code on some platforms. The
1927    /// return value is a result indicating whether the new value was written and containing the
1928    /// previous value.
1929    ///
1930    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
1931    /// ordering of this operation. `success` describes the required ordering for the
1932    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1933    /// `failure` describes the required ordering for the load operation that takes place when
1934    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1935    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1936    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1937    ///
1938    /// # Panics
1939    ///
1940    /// Panics if `failure` is [`Release`], [`AcqRel`].
1941    ///
1942    /// # Examples
1943    ///
1944    /// ```
1945    /// use portable_atomic::{AtomicPtr, Ordering};
1946    ///
1947    /// let some_ptr = AtomicPtr::new(&mut 5);
1948    ///
1949    /// let new = &mut 10;
1950    /// let mut old = some_ptr.load(Ordering::Relaxed);
1951    /// loop {
1952    ///     match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
1953    ///         Ok(_) => break,
1954    ///         Err(x) => old = x,
1955    ///     }
1956    /// }
1957    /// ```
1958    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
1959    #[inline]
1960    #[cfg_attr(
1961        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1962        track_caller
1963    )]
1964    pub fn compare_exchange_weak(
1965        &self,
1966        current: *mut T,
1967        new: *mut T,
1968        success: Ordering,
1969        failure: Ordering,
1970    ) -> Result<*mut T, *mut T> {
1971        self.inner.compare_exchange_weak(current, new, success, failure)
1972    }
1973
1974    /// Fetches the value, and applies a function to it that returns an optional
1975    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1976    /// returned `Some(_)`, else `Err(previous_value)`.
1977    ///
1978    /// Note: This may call the function multiple times if the value has been
1979    /// changed from other threads in the meantime, as long as the function
1980    /// returns `Some(_)`, but the function will have been applied only once to
1981    /// the stored value.
1982    ///
1983    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1984    /// ordering of this operation. The first describes the required ordering for
1985    /// when the operation finally succeeds while the second describes the
1986    /// required ordering for loads. These correspond to the success and failure
1987    /// orderings of [`compare_exchange`](Self::compare_exchange) respectively.
1988    ///
1989    /// Using [`Acquire`] as success ordering makes the store part of this
1990    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1991    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1992    /// [`Acquire`] or [`Relaxed`].
1993    ///
1994    /// # Panics
1995    ///
1996    /// Panics if `fetch_order` is [`Release`], [`AcqRel`].
1997    ///
1998    /// # Considerations
1999    ///
2000    /// This method is not magic; it is not provided by the hardware.
2001    /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
2002    /// and suffers from the same drawbacks.
2003    /// In particular, this method will not circumvent the [ABA Problem].
2004    ///
2005    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2006    ///
2007    /// # Examples
2008    ///
2009    /// ```
2010    /// use portable_atomic::{AtomicPtr, Ordering};
2011    ///
2012    /// let ptr: *mut _ = &mut 5;
2013    /// let some_ptr = AtomicPtr::new(ptr);
2014    ///
2015    /// let new: *mut _ = &mut 10;
2016    /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
2017    /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
2018    ///     if x == ptr {
2019    ///         Some(new)
2020    ///     } else {
2021    ///         None
2022    ///     }
2023    /// });
2024    /// assert_eq!(result, Ok(ptr));
2025    /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2026    /// ```
2027    #[inline]
2028    #[cfg_attr(
2029        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
2030        track_caller
2031    )]
2032    pub fn fetch_update<F>(
2033        &self,
2034        set_order: Ordering,
2035        fetch_order: Ordering,
2036        mut f: F,
2037    ) -> Result<*mut T, *mut T>
2038    where
2039        F: FnMut(*mut T) -> Option<*mut T>,
2040    {
2041        let mut prev = self.load(fetch_order);
2042        while let Some(next) = f(prev) {
2043            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
2044                x @ Ok(_) => return x,
2045                Err(next_prev) => prev = next_prev,
2046            }
2047        }
2048        Err(prev)
2049    }
2050
2051    #[cfg(miri)]
2052    #[inline]
2053    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2054    fn fetch_update_<F>(&self, order: Ordering, mut f: F) -> *mut T
2055    where
2056        F: FnMut(*mut T) -> *mut T,
2057    {
2058        // This is a private function and all instances of `f` only operate on the value
2059        // loaded, so there is no need to synchronize the first load/failed CAS.
2060        let mut prev = self.load(Ordering::Relaxed);
2061        loop {
2062            let next = f(prev);
2063            match self.compare_exchange_weak(prev, next, order, Ordering::Relaxed) {
2064                Ok(x) => return x,
2065                Err(next_prev) => prev = next_prev,
2066            }
2067        }
2068    }
2069    } // cfg_has_atomic_cas!
2070
2071    /// Offsets the pointer's address by adding `val` (in units of `T`),
2072    /// returning the previous pointer.
2073    ///
2074    /// This is equivalent to using [`wrapping_add`] to atomically perform the
2075    /// equivalent of `ptr = ptr.wrapping_add(val);`.
2076    ///
2077    /// This method operates in units of `T`, which means that it cannot be used
2078    /// to offset the pointer by an amount which is not a multiple of
2079    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2080    /// work with a deliberately misaligned pointer. In such cases, you may use
2081    /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
2082    ///
2083    /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
2084    /// memory ordering of this operation. All ordering modes are possible. Note
2085    /// that using [`Acquire`] makes the store part of this operation
2086    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2087    ///
2088    /// [`wrapping_add`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_add
2089    ///
2090    /// # Examples
2091    ///
2092    /// ```
2093    /// # #![allow(unstable_name_collisions)]
2094    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2095    /// use portable_atomic::{AtomicPtr, Ordering};
2096    ///
2097    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2098    /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
2099    /// // Note: units of `size_of::<i64>()`.
2100    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
2101    /// ```
2102    #[inline]
2103    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2104    pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
2105        self.fetch_byte_add(val.wrapping_mul(core::mem::size_of::<T>()), order)
2106    }
2107
2108    /// Offsets the pointer's address by subtracting `val` (in units of `T`),
2109    /// returning the previous pointer.
2110    ///
2111    /// This is equivalent to using [`wrapping_sub`] to atomically perform the
2112    /// equivalent of `ptr = ptr.wrapping_sub(val);`.
2113    ///
2114    /// This method operates in units of `T`, which means that it cannot be used
2115    /// to offset the pointer by an amount which is not a multiple of
2116    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2117    /// work with a deliberately misaligned pointer. In such cases, you may use
2118    /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
2119    ///
2120    /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
2121    /// ordering of this operation. All ordering modes are possible. Note that
2122    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2123    /// and using [`Release`] makes the load part [`Relaxed`].
2124    ///
2125    /// [`wrapping_sub`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_sub
2126    ///
2127    /// # Examples
2128    ///
2129    /// ```
2130    /// use portable_atomic::{AtomicPtr, Ordering};
2131    ///
2132    /// let array = [1i32, 2i32];
2133    /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
2134    ///
2135    /// assert!(core::ptr::eq(atom.fetch_ptr_sub(1, Ordering::Relaxed), &array[1]));
2136    /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
2137    /// ```
2138    #[inline]
2139    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2140    pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
2141        self.fetch_byte_sub(val.wrapping_mul(core::mem::size_of::<T>()), order)
2142    }
2143
2144    /// Offsets the pointer's address by adding `val` *bytes*, returning the
2145    /// previous pointer.
2146    ///
2147    /// This is equivalent to using [`wrapping_add`] and [`cast`] to atomically
2148    /// perform `ptr = ptr.cast::<u8>().wrapping_add(val).cast::<T>()`.
2149    ///
2150    /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
2151    /// memory ordering of this operation. All ordering modes are possible. Note
2152    /// that using [`Acquire`] makes the store part of this operation
2153    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2154    ///
2155    /// [`wrapping_add`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_add
2156    /// [`cast`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.cast
2157    ///
2158    /// # Examples
2159    ///
2160    /// ```
2161    /// # #![allow(unstable_name_collisions)]
2162    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2163    /// use portable_atomic::{AtomicPtr, Ordering};
2164    ///
2165    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2166    /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
2167    /// // Note: in units of bytes, not `size_of::<i64>()`.
2168    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
2169    /// ```
2170    #[inline]
2171    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2172    pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
2173        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2174        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2175        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2176        // compatible and is sound.
2177        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2178        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2179        #[cfg(miri)]
2180        {
2181            self.fetch_update_(order, |x| x.with_addr(x.addr().wrapping_add(val)))
2182        }
2183        #[cfg(not(miri))]
2184        {
2185            crate::utils::ptr::with_exposed_provenance_mut(
2186                self.as_atomic_usize().fetch_add(val, order)
2187            )
2188        }
2189    }
2190
2191    /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
2192    /// previous pointer.
2193    ///
2194    /// This is equivalent to using [`wrapping_sub`] and [`cast`] to atomically
2195    /// perform `ptr = ptr.cast::<u8>().wrapping_sub(val).cast::<T>()`.
2196    ///
2197    /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
2198    /// memory ordering of this operation. All ordering modes are possible. Note
2199    /// that using [`Acquire`] makes the store part of this operation
2200    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2201    ///
2202    /// [`wrapping_sub`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_sub
2203    /// [`cast`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.cast
2204    ///
2205    /// # Examples
2206    ///
2207    /// ```
2208    /// # #![allow(unstable_name_collisions)]
2209    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2210    /// use portable_atomic::{AtomicPtr, Ordering};
2211    ///
2212    /// let atom = AtomicPtr::<i64>::new(sptr::invalid_mut(1));
2213    /// assert_eq!(atom.fetch_byte_sub(1, Ordering::Relaxed).addr(), 1);
2214    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 0);
2215    /// ```
2216    #[inline]
2217    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2218    pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
2219        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2220        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2221        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2222        // compatible and is sound.
2223        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2224        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2225        #[cfg(miri)]
2226        {
2227            self.fetch_update_(order, |x| x.with_addr(x.addr().wrapping_sub(val)))
2228        }
2229        #[cfg(not(miri))]
2230        {
2231            crate::utils::ptr::with_exposed_provenance_mut(
2232                self.as_atomic_usize().fetch_sub(val, order)
2233            )
2234        }
2235    }
2236
2237    /// Performs a bitwise "or" operation on the address of the current pointer,
2238    /// and the argument `val`, and stores a pointer with provenance of the
2239    /// current pointer and the resulting address.
2240    ///
2241    /// This is equivalent to using [`map_addr`] to atomically perform
2242    /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
2243    /// pointer schemes to atomically set tag bits.
2244    ///
2245    /// **Caveat**: This operation returns the previous value. To compute the
2246    /// stored value without losing provenance, you may use [`map_addr`]. For
2247    /// example: `a.fetch_or(val).map_addr(|a| a | val)`.
2248    ///
2249    /// `fetch_or` takes an [`Ordering`] argument which describes the memory
2250    /// ordering of this operation. All ordering modes are possible. Note that
2251    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2252    /// and using [`Release`] makes the load part [`Relaxed`].
2253    ///
2254    /// This API and its claimed semantics are part of the Strict Provenance
2255    /// experiment, see the [module documentation for `ptr`][core::ptr] for
2256    /// details.
2257    ///
2258    /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2259    ///
2260    /// # Examples
2261    ///
2262    /// ```
2263    /// # #![allow(unstable_name_collisions)]
2264    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2265    /// use portable_atomic::{AtomicPtr, Ordering};
2266    ///
2267    /// let pointer = &mut 3i64 as *mut i64;
2268    ///
2269    /// let atom = AtomicPtr::<i64>::new(pointer);
2270    /// // Tag the bottom bit of the pointer.
2271    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
2272    /// // Extract and untag.
2273    /// let tagged = atom.load(Ordering::Relaxed);
2274    /// assert_eq!(tagged.addr() & 1, 1);
2275    /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2276    /// ```
2277    #[inline]
2278    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2279    pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
2280        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2281        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2282        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2283        // compatible and is sound.
2284        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2285        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2286        #[cfg(miri)]
2287        {
2288            self.fetch_update_(order, |x| x.with_addr(x.addr() | val))
2289        }
2290        #[cfg(not(miri))]
2291        {
2292            crate::utils::ptr::with_exposed_provenance_mut(
2293                self.as_atomic_usize().fetch_or(val, order)
2294            )
2295        }
2296    }
2297
2298    /// Performs a bitwise "and" operation on the address of the current
2299    /// pointer, and the argument `val`, and stores a pointer with provenance of
2300    /// the current pointer and the resulting address.
2301    ///
2302    /// This is equivalent to using [`map_addr`] to atomically perform
2303    /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
2304    /// pointer schemes to atomically unset tag bits.
2305    ///
2306    /// **Caveat**: This operation returns the previous value. To compute the
2307    /// stored value without losing provenance, you may use [`map_addr`]. For
2308    /// example: `a.fetch_and(val).map_addr(|a| a & val)`.
2309    ///
2310    /// `fetch_and` takes an [`Ordering`] argument which describes the memory
2311    /// ordering of this operation. All ordering modes are possible. Note that
2312    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2313    /// and using [`Release`] makes the load part [`Relaxed`].
2314    ///
2315    /// This API and its claimed semantics are part of the Strict Provenance
2316    /// experiment, see the [module documentation for `ptr`][core::ptr] for
2317    /// details.
2318    ///
2319    /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2320    ///
2321    /// # Examples
2322    ///
2323    /// ```
2324    /// # #![allow(unstable_name_collisions)]
2325    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2326    /// use portable_atomic::{AtomicPtr, Ordering};
2327    ///
2328    /// let pointer = &mut 3i64 as *mut i64;
2329    /// // A tagged pointer
2330    /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2331    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
2332    /// // Untag, and extract the previously tagged pointer.
2333    /// let untagged = atom.fetch_and(!1, Ordering::Relaxed).map_addr(|a| a & !1);
2334    /// assert_eq!(untagged, pointer);
2335    /// ```
2336    #[inline]
2337    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2338    pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
2339        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2340        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2341        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2342        // compatible and is sound.
2343        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2344        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2345        #[cfg(miri)]
2346        {
2347            self.fetch_update_(order, |x| x.with_addr(x.addr() & val))
2348        }
2349        #[cfg(not(miri))]
2350        {
2351            crate::utils::ptr::with_exposed_provenance_mut(
2352                self.as_atomic_usize().fetch_and(val, order)
2353            )
2354        }
2355    }
2356
2357    /// Performs a bitwise "xor" operation on the address of the current
2358    /// pointer, and the argument `val`, and stores a pointer with provenance of
2359    /// the current pointer and the resulting address.
2360    ///
2361    /// This is equivalent to using [`map_addr`] to atomically perform
2362    /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
2363    /// pointer schemes to atomically toggle tag bits.
2364    ///
2365    /// **Caveat**: This operation returns the previous value. To compute the
2366    /// stored value without losing provenance, you may use [`map_addr`]. For
2367    /// example: `a.fetch_xor(val).map_addr(|a| a ^ val)`.
2368    ///
2369    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
2370    /// ordering of this operation. All ordering modes are possible. Note that
2371    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2372    /// and using [`Release`] makes the load part [`Relaxed`].
2373    ///
2374    /// This API and its claimed semantics are part of the Strict Provenance
2375    /// experiment, see the [module documentation for `ptr`][core::ptr] for
2376    /// details.
2377    ///
2378    /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2379    ///
2380    /// # Examples
2381    ///
2382    /// ```
2383    /// # #![allow(unstable_name_collisions)]
2384    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2385    /// use portable_atomic::{AtomicPtr, Ordering};
2386    ///
2387    /// let pointer = &mut 3i64 as *mut i64;
2388    /// let atom = AtomicPtr::<i64>::new(pointer);
2389    ///
2390    /// // Toggle a tag bit on the pointer.
2391    /// atom.fetch_xor(1, Ordering::Relaxed);
2392    /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2393    /// ```
2394    #[inline]
2395    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2396    pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
2397        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2398        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2399        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2400        // compatible and is sound.
2401        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2402        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2403        #[cfg(miri)]
2404        {
2405            self.fetch_update_(order, |x| x.with_addr(x.addr() ^ val))
2406        }
2407        #[cfg(not(miri))]
2408        {
2409            crate::utils::ptr::with_exposed_provenance_mut(
2410                self.as_atomic_usize().fetch_xor(val, order)
2411            )
2412        }
2413    }
2414
2415    /// Sets the bit at the specified bit-position to 1.
2416    ///
2417    /// Returns `true` if the specified bit was previously set to 1.
2418    ///
2419    /// `bit_set` takes an [`Ordering`] argument which describes the memory ordering
2420    /// of this operation. All ordering modes are possible. Note that using
2421    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2422    /// using [`Release`] makes the load part [`Relaxed`].
2423    ///
2424    /// This corresponds to x86's `lock bts`, and the implementation calls them on x86/x86_64.
2425    ///
2426    /// # Examples
2427    ///
2428    /// ```
2429    /// # #![allow(unstable_name_collisions)]
2430    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2431    /// use portable_atomic::{AtomicPtr, Ordering};
2432    ///
2433    /// let pointer = &mut 3i64 as *mut i64;
2434    ///
2435    /// let atom = AtomicPtr::<i64>::new(pointer);
2436    /// // Tag the bottom bit of the pointer.
2437    /// assert!(!atom.bit_set(0, Ordering::Relaxed));
2438    /// // Extract and untag.
2439    /// let tagged = atom.load(Ordering::Relaxed);
2440    /// assert_eq!(tagged.addr() & 1, 1);
2441    /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2442    /// ```
2443    #[inline]
2444    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2445    pub fn bit_set(&self, bit: u32, order: Ordering) -> bool {
2446        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2447        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2448        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2449        // compatible and is sound.
2450        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2451        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2452        #[cfg(miri)]
2453        {
2454            let mask = 1_usize.wrapping_shl(bit);
2455            self.fetch_or(mask, order).addr() & mask != 0
2456        }
2457        #[cfg(not(miri))]
2458        {
2459            self.as_atomic_usize().bit_set(bit, order)
2460        }
2461    }
2462
2463    /// Clears the bit at the specified bit-position to 1.
2464    ///
2465    /// Returns `true` if the specified bit was previously set to 1.
2466    ///
2467    /// `bit_clear` takes an [`Ordering`] argument which describes the memory ordering
2468    /// of this operation. All ordering modes are possible. Note that using
2469    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2470    /// using [`Release`] makes the load part [`Relaxed`].
2471    ///
2472    /// This corresponds to x86's `lock btr`, and the implementation calls them on x86/x86_64.
2473    ///
2474    /// # Examples
2475    ///
2476    /// ```
2477    /// # #![allow(unstable_name_collisions)]
2478    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2479    /// use portable_atomic::{AtomicPtr, Ordering};
2480    ///
2481    /// let pointer = &mut 3i64 as *mut i64;
2482    /// // A tagged pointer
2483    /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2484    /// assert!(atom.bit_set(0, Ordering::Relaxed));
2485    /// // Untag
2486    /// assert!(atom.bit_clear(0, Ordering::Relaxed));
2487    /// ```
2488    #[inline]
2489    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2490    pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool {
2491        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2492        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2493        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2494        // compatible and is sound.
2495        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2496        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2497        #[cfg(miri)]
2498        {
2499            let mask = 1_usize.wrapping_shl(bit);
2500            self.fetch_and(!mask, order).addr() & mask != 0
2501        }
2502        #[cfg(not(miri))]
2503        {
2504            self.as_atomic_usize().bit_clear(bit, order)
2505        }
2506    }
2507
2508    /// Toggles the bit at the specified bit-position.
2509    ///
2510    /// Returns `true` if the specified bit was previously set to 1.
2511    ///
2512    /// `bit_toggle` takes an [`Ordering`] argument which describes the memory ordering
2513    /// of this operation. All ordering modes are possible. Note that using
2514    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2515    /// using [`Release`] makes the load part [`Relaxed`].
2516    ///
2517    /// This corresponds to x86's `lock btc`, and the implementation calls them on x86/x86_64.
2518    ///
2519    /// # Examples
2520    ///
2521    /// ```
2522    /// # #![allow(unstable_name_collisions)]
2523    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2524    /// use portable_atomic::{AtomicPtr, Ordering};
2525    ///
2526    /// let pointer = &mut 3i64 as *mut i64;
2527    /// let atom = AtomicPtr::<i64>::new(pointer);
2528    ///
2529    /// // Toggle a tag bit on the pointer.
2530    /// atom.bit_toggle(0, Ordering::Relaxed);
2531    /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2532    /// ```
2533    #[inline]
2534    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2535    pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool {
2536        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2537        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2538        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2539        // compatible and is sound.
2540        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2541        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2542        #[cfg(miri)]
2543        {
2544            let mask = 1_usize.wrapping_shl(bit);
2545            self.fetch_xor(mask, order).addr() & mask != 0
2546        }
2547        #[cfg(not(miri))]
2548        {
2549            self.as_atomic_usize().bit_toggle(bit, order)
2550        }
2551    }
2552
2553    #[cfg(not(miri))]
2554    #[inline(always)]
2555    fn as_atomic_usize(&self) -> &AtomicUsize {
2556        static_assert!(
2557            core::mem::size_of::<AtomicPtr<()>>() == core::mem::size_of::<AtomicUsize>()
2558        );
2559        static_assert!(
2560            core::mem::align_of::<AtomicPtr<()>>() == core::mem::align_of::<AtomicUsize>()
2561        );
2562        // SAFETY: AtomicPtr and AtomicUsize have the same layout,
2563        // and both access data in the same way.
2564        unsafe { &*(self as *const Self as *const AtomicUsize) }
2565    }
2566    } // cfg_has_atomic_cas_or_amo32!
2567
2568    const_fn! {
2569        const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
2570        /// Returns a mutable pointer to the underlying pointer.
2571        ///
2572        /// Returning an `*mut` pointer from a shared reference to this atomic is
2573        /// safe because the atomic types work with interior mutability. Any use of
2574        /// the returned raw pointer requires an `unsafe` block and has to uphold
2575        /// the safety requirements. If there is concurrent access, note the following
2576        /// additional safety requirements:
2577        ///
2578        /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
2579        ///   operations on it must be atomic.
2580        /// - Otherwise, any concurrent operations on it must be compatible with
2581        ///   operations performed by this atomic type.
2582        ///
2583        /// This is `const fn` on Rust 1.58+.
2584        #[inline]
2585        pub const fn as_ptr(&self) -> *mut *mut T {
2586            self.inner.as_ptr()
2587        }
2588    }
2589}
2590// See https://github.com/taiki-e/portable-atomic/issues/180
2591#[cfg(not(feature = "require-cas"))]
2592cfg_no_atomic_cas! {
2593#[doc(hidden)]
2594#[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
2595impl<'a, T: 'a> AtomicPtr<T> {
2596    cfg_no_atomic_cas_or_amo32! {
2597    #[inline]
2598    pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T
2599    where
2600        &'a Self: HasSwap,
2601    {
2602        unimplemented!()
2603    }
2604    } // cfg_no_atomic_cas_or_amo32!
2605    #[inline]
2606    pub fn compare_exchange(
2607        &self,
2608        current: *mut T,
2609        new: *mut T,
2610        success: Ordering,
2611        failure: Ordering,
2612    ) -> Result<*mut T, *mut T>
2613    where
2614        &'a Self: HasCompareExchange,
2615    {
2616        unimplemented!()
2617    }
2618    #[inline]
2619    pub fn compare_exchange_weak(
2620        &self,
2621        current: *mut T,
2622        new: *mut T,
2623        success: Ordering,
2624        failure: Ordering,
2625    ) -> Result<*mut T, *mut T>
2626    where
2627        &'a Self: HasCompareExchangeWeak,
2628    {
2629        unimplemented!()
2630    }
2631    #[inline]
2632    pub fn fetch_update<F>(
2633        &self,
2634        set_order: Ordering,
2635        fetch_order: Ordering,
2636        f: F,
2637    ) -> Result<*mut T, *mut T>
2638    where
2639        F: FnMut(*mut T) -> Option<*mut T>,
2640        &'a Self: HasFetchUpdate,
2641    {
2642        unimplemented!()
2643    }
2644    cfg_no_atomic_cas_or_amo32! {
2645    #[inline]
2646    pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T
2647    where
2648        &'a Self: HasFetchPtrAdd,
2649    {
2650        unimplemented!()
2651    }
2652    #[inline]
2653    pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T
2654    where
2655        &'a Self: HasFetchPtrSub,
2656    {
2657        unimplemented!()
2658    }
2659    #[inline]
2660    pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T
2661    where
2662        &'a Self: HasFetchByteAdd,
2663    {
2664        unimplemented!()
2665    }
2666    #[inline]
2667    pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T
2668    where
2669        &'a Self: HasFetchByteSub,
2670    {
2671        unimplemented!()
2672    }
2673    #[inline]
2674    pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T
2675    where
2676        &'a Self: HasFetchOr,
2677    {
2678        unimplemented!()
2679    }
2680    #[inline]
2681    pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T
2682    where
2683        &'a Self: HasFetchAnd,
2684    {
2685        unimplemented!()
2686    }
2687    #[inline]
2688    pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T
2689    where
2690        &'a Self: HasFetchXor,
2691    {
2692        unimplemented!()
2693    }
2694    #[inline]
2695    pub fn bit_set(&self, bit: u32, order: Ordering) -> bool
2696    where
2697        &'a Self: HasBitSet,
2698    {
2699        unimplemented!()
2700    }
2701    #[inline]
2702    pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool
2703    where
2704        &'a Self: HasBitClear,
2705    {
2706        unimplemented!()
2707    }
2708    #[inline]
2709    pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool
2710    where
2711        &'a Self: HasBitToggle,
2712    {
2713        unimplemented!()
2714    }
2715    } // cfg_no_atomic_cas_or_amo32!
2716}
2717} // cfg_no_atomic_cas!
2718} // cfg_has_atomic_ptr!
2719
2720macro_rules! atomic_int {
2721    // Atomic{I,U}* impls
2722    ($atomic_type:ident, $int_type:ident, $align:literal,
2723        $cfg_has_atomic_cas_or_amo32_or_8:ident, $cfg_no_atomic_cas_or_amo32_or_8:ident
2724        $(, #[$cfg_float:meta] $atomic_float_type:ident, $float_type:ident)?
2725    ) => {
2726        doc_comment! {
2727            concat!("An integer type which can be safely shared between threads.
2728
2729This type has the same in-memory representation as the underlying integer type,
2730[`", stringify!($int_type), "`].
2731
2732If the compiler and the platform support atomic loads and stores of [`", stringify!($int_type),
2733"`], this type is a wrapper for the standard library's `", stringify!($atomic_type),
2734"`. If the platform supports it but the compiler does not, atomic operations are implemented using
2735inline assembly. Otherwise synchronizes using global locks.
2736You can call [`", stringify!($atomic_type), "::is_lock_free()`] to check whether
2737atomic instructions or locks will be used.
2738"
2739            ),
2740            // We can use #[repr(transparent)] here, but #[repr(C, align(N))]
2741            // will show clearer docs.
2742            #[repr(C, align($align))]
2743            pub struct $atomic_type {
2744                inner: imp::$atomic_type,
2745            }
2746        }
2747
2748        impl Default for $atomic_type {
2749            #[inline]
2750            fn default() -> Self {
2751                Self::new($int_type::default())
2752            }
2753        }
2754
2755        impl From<$int_type> for $atomic_type {
2756            #[inline]
2757            fn from(v: $int_type) -> Self {
2758                Self::new(v)
2759            }
2760        }
2761
2762        // UnwindSafe is implicitly implemented.
2763        #[cfg(not(portable_atomic_no_core_unwind_safe))]
2764        impl core::panic::RefUnwindSafe for $atomic_type {}
2765        #[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
2766        impl std::panic::RefUnwindSafe for $atomic_type {}
2767
2768        impl_debug_and_serde!($atomic_type);
2769
2770        impl $atomic_type {
2771            doc_comment! {
2772                concat!(
2773                    "Creates a new atomic integer.
2774
2775# Examples
2776
2777```
2778use portable_atomic::", stringify!($atomic_type), ";
2779
2780let atomic_forty_two = ", stringify!($atomic_type), "::new(42);
2781```"
2782                ),
2783                #[inline]
2784                #[must_use]
2785                pub const fn new(v: $int_type) -> Self {
2786                    static_assert_layout!($atomic_type, $int_type);
2787                    Self { inner: imp::$atomic_type::new(v) }
2788                }
2789            }
2790
2791            // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
2792            #[cfg(not(portable_atomic_no_const_mut_refs))]
2793            doc_comment! {
2794                concat!("Creates a new reference to an atomic integer from a pointer.
2795
2796This is `const fn` on Rust 1.83+.
2797
2798# Safety
2799
2800* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
2801  can be bigger than `align_of::<", stringify!($int_type), ">()`).
2802* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2803* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
2804  behind `ptr` must have a happens-before relationship with atomic accesses via
2805  the returned value (or vice-versa).
2806  * In other words, time periods where the value is accessed atomically may not
2807    overlap with periods where the value is accessed non-atomically.
2808  * This requirement is trivially satisfied if `ptr` is never used non-atomically
2809    for the duration of lifetime `'a`. Most use cases should be able to follow
2810    this guideline.
2811  * This requirement is also trivially satisfied if all accesses (atomic or not) are
2812    done from the same thread.
2813* If this atomic type is *not* lock-free:
2814  * Any accesses to the value behind `ptr` must have a happens-before relationship
2815    with accesses via the returned value (or vice-versa).
2816  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
2817    be compatible with operations performed by this atomic type.
2818* This method must not be used to create overlapping or mixed-size atomic
2819  accesses, as these are not supported by the memory model.
2820
2821[valid]: core::ptr#safety"),
2822                #[inline]
2823                #[must_use]
2824                pub const unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a Self {
2825                    #[allow(clippy::cast_ptr_alignment)]
2826                    // SAFETY: guaranteed by the caller
2827                    unsafe { &*(ptr as *mut Self) }
2828                }
2829            }
2830            #[cfg(portable_atomic_no_const_mut_refs)]
2831            doc_comment! {
2832                concat!("Creates a new reference to an atomic integer from a pointer.
2833
2834This is `const fn` on Rust 1.83+.
2835
2836# Safety
2837
2838* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
2839  can be bigger than `align_of::<", stringify!($int_type), ">()`).
2840* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2841* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
2842  behind `ptr` must have a happens-before relationship with atomic accesses via
2843  the returned value (or vice-versa).
2844  * In other words, time periods where the value is accessed atomically may not
2845    overlap with periods where the value is accessed non-atomically.
2846  * This requirement is trivially satisfied if `ptr` is never used non-atomically
2847    for the duration of lifetime `'a`. Most use cases should be able to follow
2848    this guideline.
2849  * This requirement is also trivially satisfied if all accesses (atomic or not) are
2850    done from the same thread.
2851* If this atomic type is *not* lock-free:
2852  * Any accesses to the value behind `ptr` must have a happens-before relationship
2853    with accesses via the returned value (or vice-versa).
2854  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
2855    be compatible with operations performed by this atomic type.
2856* This method must not be used to create overlapping or mixed-size atomic
2857  accesses, as these are not supported by the memory model.
2858
2859[valid]: core::ptr#safety"),
2860                #[inline]
2861                #[must_use]
2862                pub unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a Self {
2863                    #[allow(clippy::cast_ptr_alignment)]
2864                    // SAFETY: guaranteed by the caller
2865                    unsafe { &*(ptr as *mut Self) }
2866                }
2867            }
2868
2869            doc_comment! {
2870                concat!("Returns `true` if operations on values of this type are lock-free.
2871
2872If the compiler or the platform doesn't support the necessary
2873atomic instructions, global locks for every potentially
2874concurrent atomic operation will be used.
2875
2876# Examples
2877
2878```
2879use portable_atomic::", stringify!($atomic_type), ";
2880
2881let is_lock_free = ", stringify!($atomic_type), "::is_lock_free();
2882```"),
2883                #[inline]
2884                #[must_use]
2885                pub fn is_lock_free() -> bool {
2886                    <imp::$atomic_type>::is_lock_free()
2887                }
2888            }
2889
2890            doc_comment! {
2891                concat!("Returns `true` if operations on values of this type are lock-free.
2892
2893If the compiler or the platform doesn't support the necessary
2894atomic instructions, global locks for every potentially
2895concurrent atomic operation will be used.
2896
2897**Note:** If the atomic operation relies on dynamic CPU feature detection,
2898this type may be lock-free even if the function returns false.
2899
2900# Examples
2901
2902```
2903use portable_atomic::", stringify!($atomic_type), ";
2904
2905const IS_ALWAYS_LOCK_FREE: bool = ", stringify!($atomic_type), "::is_always_lock_free();
2906```"),
2907                #[inline]
2908                #[must_use]
2909                pub const fn is_always_lock_free() -> bool {
2910                    <imp::$atomic_type>::IS_ALWAYS_LOCK_FREE
2911                }
2912            }
2913            #[cfg(test)]
2914            const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
2915
2916            #[cfg(not(portable_atomic_no_const_mut_refs))]
2917            doc_comment! {
2918                concat!("Returns a mutable reference to the underlying integer.\n
2919This is safe because the mutable reference guarantees that no other threads are
2920concurrently accessing the atomic data.
2921
2922This is `const fn` on Rust 1.83+.
2923
2924# Examples
2925
2926```
2927use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2928
2929let mut some_var = ", stringify!($atomic_type), "::new(10);
2930assert_eq!(*some_var.get_mut(), 10);
2931*some_var.get_mut() = 5;
2932assert_eq!(some_var.load(Ordering::SeqCst), 5);
2933```"),
2934                #[inline]
2935                pub const fn get_mut(&mut self) -> &mut $int_type {
2936                    // SAFETY: the mutable reference guarantees unique ownership.
2937                    // (core::sync::atomic::Atomic*::get_mut is not const yet)
2938                    unsafe { &mut *self.as_ptr() }
2939                }
2940            }
2941            #[cfg(portable_atomic_no_const_mut_refs)]
2942            doc_comment! {
2943                concat!("Returns a mutable reference to the underlying integer.\n
2944This is safe because the mutable reference guarantees that no other threads are
2945concurrently accessing the atomic data.
2946
2947This is `const fn` on Rust 1.83+.
2948
2949# Examples
2950
2951```
2952use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2953
2954let mut some_var = ", stringify!($atomic_type), "::new(10);
2955assert_eq!(*some_var.get_mut(), 10);
2956*some_var.get_mut() = 5;
2957assert_eq!(some_var.load(Ordering::SeqCst), 5);
2958```"),
2959                #[inline]
2960                pub fn get_mut(&mut self) -> &mut $int_type {
2961                    // SAFETY: the mutable reference guarantees unique ownership.
2962                    unsafe { &mut *self.as_ptr() }
2963                }
2964            }
2965
2966            // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
2967            // https://github.com/rust-lang/rust/issues/76314
2968
2969            #[cfg(not(portable_atomic_no_const_transmute))]
2970            doc_comment! {
2971                concat!("Consumes the atomic and returns the contained value.
2972
2973This is safe because passing `self` by value guarantees that no other threads are
2974concurrently accessing the atomic data.
2975
2976This is `const fn` on Rust 1.56+.
2977
2978# Examples
2979
2980```
2981use portable_atomic::", stringify!($atomic_type), ";
2982
2983let some_var = ", stringify!($atomic_type), "::new(5);
2984assert_eq!(some_var.into_inner(), 5);
2985```"),
2986                #[inline]
2987                pub const fn into_inner(self) -> $int_type {
2988                    // SAFETY: $atomic_type and $int_type have the same size and in-memory representations,
2989                    // so they can be safely transmuted.
2990                    // (const UnsafeCell::into_inner is unstable)
2991                    unsafe { core::mem::transmute(self) }
2992                }
2993            }
2994            #[cfg(portable_atomic_no_const_transmute)]
2995            doc_comment! {
2996                concat!("Consumes the atomic and returns the contained value.
2997
2998This is safe because passing `self` by value guarantees that no other threads are
2999concurrently accessing the atomic data.
3000
3001This is `const fn` on Rust 1.56+.
3002
3003# Examples
3004
3005```
3006use portable_atomic::", stringify!($atomic_type), ";
3007
3008let some_var = ", stringify!($atomic_type), "::new(5);
3009assert_eq!(some_var.into_inner(), 5);
3010```"),
3011                #[inline]
3012                pub fn into_inner(self) -> $int_type {
3013                    // SAFETY: $atomic_type and $int_type have the same size and in-memory representations,
3014                    // so they can be safely transmuted.
3015                    // (const UnsafeCell::into_inner is unstable)
3016                    unsafe { core::mem::transmute(self) }
3017                }
3018            }
3019
3020            doc_comment! {
3021                concat!("Loads a value from the atomic integer.
3022
3023`load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
3024Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
3025
3026# Panics
3027
3028Panics if `order` is [`Release`] or [`AcqRel`].
3029
3030# Examples
3031
3032```
3033use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3034
3035let some_var = ", stringify!($atomic_type), "::new(5);
3036
3037assert_eq!(some_var.load(Ordering::Relaxed), 5);
3038```"),
3039                #[inline]
3040                #[cfg_attr(
3041                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3042                    track_caller
3043                )]
3044                pub fn load(&self, order: Ordering) -> $int_type {
3045                    self.inner.load(order)
3046                }
3047            }
3048
3049            doc_comment! {
3050                concat!("Stores a value into the atomic integer.
3051
3052`store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
3053Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
3054
3055# Panics
3056
3057Panics if `order` is [`Acquire`] or [`AcqRel`].
3058
3059# Examples
3060
3061```
3062use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3063
3064let some_var = ", stringify!($atomic_type), "::new(5);
3065
3066some_var.store(10, Ordering::Relaxed);
3067assert_eq!(some_var.load(Ordering::Relaxed), 10);
3068```"),
3069                #[inline]
3070                #[cfg_attr(
3071                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3072                    track_caller
3073                )]
3074                pub fn store(&self, val: $int_type, order: Ordering) {
3075                    self.inner.store(val, order)
3076                }
3077            }
3078
3079            cfg_has_atomic_cas_or_amo32! {
3080            $cfg_has_atomic_cas_or_amo32_or_8! {
3081            doc_comment! {
3082                concat!("Stores a value into the atomic integer, returning the previous value.
3083
3084`swap` takes an [`Ordering`] argument which describes the memory ordering
3085of this operation. All ordering modes are possible. Note that using
3086[`Acquire`] makes the store part of this operation [`Relaxed`], and
3087using [`Release`] makes the load part [`Relaxed`].
3088
3089# Examples
3090
3091```
3092use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3093
3094let some_var = ", stringify!($atomic_type), "::new(5);
3095
3096assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
3097```"),
3098                #[inline]
3099                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3100                pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
3101                    self.inner.swap(val, order)
3102                }
3103            }
3104            } // $cfg_has_atomic_cas_or_amo32_or_8!
3105
3106            cfg_has_atomic_cas! {
3107            doc_comment! {
3108                concat!("Stores a value into the atomic integer if the current value is the same as
3109the `current` value.
3110
3111The return value is a result indicating whether the new value was written and
3112containing the previous value. On success this value is guaranteed to be equal to
3113`current`.
3114
3115`compare_exchange` takes two [`Ordering`] arguments to describe the memory
3116ordering of this operation. `success` describes the required ordering for the
3117read-modify-write operation that takes place if the comparison with `current` succeeds.
3118`failure` describes the required ordering for the load operation that takes place when
3119the comparison fails. Using [`Acquire`] as success ordering makes the store part
3120of this operation [`Relaxed`], and using [`Release`] makes the successful load
3121[`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3122
3123# Panics
3124
3125Panics if `failure` is [`Release`], [`AcqRel`].
3126
3127# Examples
3128
3129```
3130use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3131
3132let some_var = ", stringify!($atomic_type), "::new(5);
3133
3134assert_eq!(
3135    some_var.compare_exchange(5, 10, Ordering::Acquire, Ordering::Relaxed),
3136    Ok(5),
3137);
3138assert_eq!(some_var.load(Ordering::Relaxed), 10);
3139
3140assert_eq!(
3141    some_var.compare_exchange(6, 12, Ordering::SeqCst, Ordering::Acquire),
3142    Err(10),
3143);
3144assert_eq!(some_var.load(Ordering::Relaxed), 10);
3145```"),
3146                #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
3147                #[inline]
3148                #[cfg_attr(
3149                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3150                    track_caller
3151                )]
3152                pub fn compare_exchange(
3153                    &self,
3154                    current: $int_type,
3155                    new: $int_type,
3156                    success: Ordering,
3157                    failure: Ordering,
3158                ) -> Result<$int_type, $int_type> {
3159                    self.inner.compare_exchange(current, new, success, failure)
3160                }
3161            }
3162
3163            doc_comment! {
3164                concat!("Stores a value into the atomic integer if the current value is the same as
3165the `current` value.
3166Unlike [`compare_exchange`](Self::compare_exchange)
3167this function is allowed to spuriously fail even
3168when the comparison succeeds, which can result in more efficient code on some
3169platforms. The return value is a result indicating whether the new value was
3170written and containing the previous value.
3171
3172`compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
3173ordering of this operation. `success` describes the required ordering for the
3174read-modify-write operation that takes place if the comparison with `current` succeeds.
3175`failure` describes the required ordering for the load operation that takes place when
3176the comparison fails. Using [`Acquire`] as success ordering makes the store part
3177of this operation [`Relaxed`], and using [`Release`] makes the successful load
3178[`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3179
3180# Panics
3181
3182Panics if `failure` is [`Release`], [`AcqRel`].
3183
3184# Examples
3185
3186```
3187use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3188
3189let val = ", stringify!($atomic_type), "::new(4);
3190
3191let mut old = val.load(Ordering::Relaxed);
3192loop {
3193    let new = old * 2;
3194    match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
3195        Ok(_) => break,
3196        Err(x) => old = x,
3197    }
3198}
3199```"),
3200                #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
3201                #[inline]
3202                #[cfg_attr(
3203                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3204                    track_caller
3205                )]
3206                pub fn compare_exchange_weak(
3207                    &self,
3208                    current: $int_type,
3209                    new: $int_type,
3210                    success: Ordering,
3211                    failure: Ordering,
3212                ) -> Result<$int_type, $int_type> {
3213                    self.inner.compare_exchange_weak(current, new, success, failure)
3214                }
3215            }
3216            } // cfg_has_atomic_cas!
3217
3218            $cfg_has_atomic_cas_or_amo32_or_8! {
3219            doc_comment! {
3220                concat!("Adds to the current value, returning the previous value.
3221
3222This operation wraps around on overflow.
3223
3224`fetch_add` takes an [`Ordering`] argument which describes the memory ordering
3225of this operation. All ordering modes are possible. Note that using
3226[`Acquire`] makes the store part of this operation [`Relaxed`], and
3227using [`Release`] makes the load part [`Relaxed`].
3228
3229# Examples
3230
3231```
3232use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3233
3234let foo = ", stringify!($atomic_type), "::new(0);
3235assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
3236assert_eq!(foo.load(Ordering::SeqCst), 10);
3237```"),
3238                #[inline]
3239                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3240                pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
3241                    self.inner.fetch_add(val, order)
3242                }
3243            }
3244
3245            doc_comment! {
3246                concat!("Adds to the current value.
3247
3248This operation wraps around on overflow.
3249
3250Unlike `fetch_add`, this does not return the previous value.
3251
3252`add` takes an [`Ordering`] argument which describes the memory ordering
3253of this operation. All ordering modes are possible. Note that using
3254[`Acquire`] makes the store part of this operation [`Relaxed`], and
3255using [`Release`] makes the load part [`Relaxed`].
3256
3257This function may generate more efficient code than `fetch_add` on some platforms.
3258
3259- MSP430: `add` instead of disabling interrupts ({8,16}-bit atomics)
3260
3261# Examples
3262
3263```
3264use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3265
3266let foo = ", stringify!($atomic_type), "::new(0);
3267foo.add(10, Ordering::SeqCst);
3268assert_eq!(foo.load(Ordering::SeqCst), 10);
3269```"),
3270                #[inline]
3271                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3272                pub fn add(&self, val: $int_type, order: Ordering) {
3273                    self.inner.add(val, order);
3274                }
3275            }
3276
3277            doc_comment! {
3278                concat!("Subtracts from the current value, returning the previous value.
3279
3280This operation wraps around on overflow.
3281
3282`fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
3283of this operation. All ordering modes are possible. Note that using
3284[`Acquire`] makes the store part of this operation [`Relaxed`], and
3285using [`Release`] makes the load part [`Relaxed`].
3286
3287# Examples
3288
3289```
3290use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3291
3292let foo = ", stringify!($atomic_type), "::new(20);
3293assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
3294assert_eq!(foo.load(Ordering::SeqCst), 10);
3295```"),
3296                #[inline]
3297                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3298                pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
3299                    self.inner.fetch_sub(val, order)
3300                }
3301            }
3302
3303            doc_comment! {
3304                concat!("Subtracts from the current value.
3305
3306This operation wraps around on overflow.
3307
3308Unlike `fetch_sub`, this does not return the previous value.
3309
3310`sub` takes an [`Ordering`] argument which describes the memory ordering
3311of this operation. All ordering modes are possible. Note that using
3312[`Acquire`] makes the store part of this operation [`Relaxed`], and
3313using [`Release`] makes the load part [`Relaxed`].
3314
3315This function may generate more efficient code than `fetch_sub` on some platforms.
3316
3317- MSP430: `sub` instead of disabling interrupts ({8,16}-bit atomics)
3318
3319# Examples
3320
3321```
3322use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3323
3324let foo = ", stringify!($atomic_type), "::new(20);
3325foo.sub(10, Ordering::SeqCst);
3326assert_eq!(foo.load(Ordering::SeqCst), 10);
3327```"),
3328                #[inline]
3329                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3330                pub fn sub(&self, val: $int_type, order: Ordering) {
3331                    self.inner.sub(val, order);
3332                }
3333            }
3334            } // $cfg_has_atomic_cas_or_amo32_or_8!
3335
3336            doc_comment! {
3337                concat!("Bitwise \"and\" with the current value.
3338
3339Performs a bitwise \"and\" operation on the current value and the argument `val`, and
3340sets the new value to the result.
3341
3342Returns the previous value.
3343
3344`fetch_and` takes an [`Ordering`] argument which describes the memory ordering
3345of this operation. All ordering modes are possible. Note that using
3346[`Acquire`] makes the store part of this operation [`Relaxed`], and
3347using [`Release`] makes the load part [`Relaxed`].
3348
3349# Examples
3350
3351```
3352use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3353
3354let foo = ", stringify!($atomic_type), "::new(0b101101);
3355assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
3356assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3357```"),
3358                #[inline]
3359                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3360                pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
3361                    self.inner.fetch_and(val, order)
3362                }
3363            }
3364
3365            doc_comment! {
3366                concat!("Bitwise \"and\" with the current value.
3367
3368Performs a bitwise \"and\" operation on the current value and the argument `val`, and
3369sets the new value to the result.
3370
3371Unlike `fetch_and`, this does not return the previous value.
3372
3373`and` takes an [`Ordering`] argument which describes the memory ordering
3374of this operation. All ordering modes are possible. Note that using
3375[`Acquire`] makes the store part of this operation [`Relaxed`], and
3376using [`Release`] makes the load part [`Relaxed`].
3377
3378This function may generate more efficient code than `fetch_and` on some platforms.
3379
3380- x86/x86_64: `lock and` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3381- MSP430: `and` instead of disabling interrupts ({8,16}-bit atomics)
3382
3383Note: On x86/x86_64, the use of either function should not usually
3384affect the generated code, because LLVM can properly optimize the case
3385where the result is unused.
3386
3387# Examples
3388
3389```
3390use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3391
3392let foo = ", stringify!($atomic_type), "::new(0b101101);
3393assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
3394assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3395```"),
3396                #[inline]
3397                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3398                pub fn and(&self, val: $int_type, order: Ordering) {
3399                    self.inner.and(val, order);
3400                }
3401            }
3402
3403            cfg_has_atomic_cas! {
3404            doc_comment! {
3405                concat!("Bitwise \"nand\" with the current value.
3406
3407Performs a bitwise \"nand\" operation on the current value and the argument `val`, and
3408sets the new value to the result.
3409
3410Returns the previous value.
3411
3412`fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
3413of this operation. All ordering modes are possible. Note that using
3414[`Acquire`] makes the store part of this operation [`Relaxed`], and
3415using [`Release`] makes the load part [`Relaxed`].
3416
3417# Examples
3418
3419```
3420use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3421
3422let foo = ", stringify!($atomic_type), "::new(0x13);
3423assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
3424assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
3425```"),
3426                #[inline]
3427                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3428                pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
3429                    self.inner.fetch_nand(val, order)
3430                }
3431            }
3432            } // cfg_has_atomic_cas!
3433
3434            doc_comment! {
3435                concat!("Bitwise \"or\" with the current value.
3436
3437Performs a bitwise \"or\" operation on the current value and the argument `val`, and
3438sets the new value to the result.
3439
3440Returns the previous value.
3441
3442`fetch_or` takes an [`Ordering`] argument which describes the memory ordering
3443of this operation. All ordering modes are possible. Note that using
3444[`Acquire`] makes the store part of this operation [`Relaxed`], and
3445using [`Release`] makes the load part [`Relaxed`].
3446
3447# Examples
3448
3449```
3450use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3451
3452let foo = ", stringify!($atomic_type), "::new(0b101101);
3453assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
3454assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3455```"),
3456                #[inline]
3457                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3458                pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
3459                    self.inner.fetch_or(val, order)
3460                }
3461            }
3462
3463            doc_comment! {
3464                concat!("Bitwise \"or\" with the current value.
3465
3466Performs a bitwise \"or\" operation on the current value and the argument `val`, and
3467sets the new value to the result.
3468
3469Unlike `fetch_or`, this does not return the previous value.
3470
3471`or` takes an [`Ordering`] argument which describes the memory ordering
3472of this operation. All ordering modes are possible. Note that using
3473[`Acquire`] makes the store part of this operation [`Relaxed`], and
3474using [`Release`] makes the load part [`Relaxed`].
3475
3476This function may generate more efficient code than `fetch_or` on some platforms.
3477
3478- x86/x86_64: `lock or` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3479- MSP430: `or` instead of disabling interrupts ({8,16}-bit atomics)
3480
3481Note: On x86/x86_64, the use of either function should not usually
3482affect the generated code, because LLVM can properly optimize the case
3483where the result is unused.
3484
3485# Examples
3486
3487```
3488use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3489
3490let foo = ", stringify!($atomic_type), "::new(0b101101);
3491assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
3492assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3493```"),
3494                #[inline]
3495                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3496                pub fn or(&self, val: $int_type, order: Ordering) {
3497                    self.inner.or(val, order);
3498                }
3499            }
3500
3501            doc_comment! {
3502                concat!("Bitwise \"xor\" with the current value.
3503
3504Performs a bitwise \"xor\" operation on the current value and the argument `val`, and
3505sets the new value to the result.
3506
3507Returns the previous value.
3508
3509`fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
3510of this operation. All ordering modes are possible. Note that using
3511[`Acquire`] makes the store part of this operation [`Relaxed`], and
3512using [`Release`] makes the load part [`Relaxed`].
3513
3514# Examples
3515
3516```
3517use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3518
3519let foo = ", stringify!($atomic_type), "::new(0b101101);
3520assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
3521assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3522```"),
3523                #[inline]
3524                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3525                pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
3526                    self.inner.fetch_xor(val, order)
3527                }
3528            }
3529
3530            doc_comment! {
3531                concat!("Bitwise \"xor\" with the current value.
3532
3533Performs a bitwise \"xor\" operation on the current value and the argument `val`, and
3534sets the new value to the result.
3535
3536Unlike `fetch_xor`, this does not return the previous value.
3537
3538`xor` takes an [`Ordering`] argument which describes the memory ordering
3539of this operation. All ordering modes are possible. Note that using
3540[`Acquire`] makes the store part of this operation [`Relaxed`], and
3541using [`Release`] makes the load part [`Relaxed`].
3542
3543This function may generate more efficient code than `fetch_xor` on some platforms.
3544
3545- x86/x86_64: `lock xor` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3546- MSP430: `xor` instead of disabling interrupts ({8,16}-bit atomics)
3547
3548Note: On x86/x86_64, the use of either function should not usually
3549affect the generated code, because LLVM can properly optimize the case
3550where the result is unused.
3551
3552# Examples
3553
3554```
3555use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3556
3557let foo = ", stringify!($atomic_type), "::new(0b101101);
3558foo.xor(0b110011, Ordering::SeqCst);
3559assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3560```"),
3561                #[inline]
3562                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3563                pub fn xor(&self, val: $int_type, order: Ordering) {
3564                    self.inner.xor(val, order);
3565                }
3566            }
3567
3568            cfg_has_atomic_cas! {
3569            doc_comment! {
3570                concat!("Fetches the value, and applies a function to it that returns an optional
3571new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3572`Err(previous_value)`.
3573
3574Note: This may call the function multiple times if the value has been changed from other threads in
3575the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3576only once to the stored value.
3577
3578`fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3579The first describes the required ordering for when the operation finally succeeds while the second
3580describes the required ordering for loads. These correspond to the success and failure orderings of
3581[`compare_exchange`](Self::compare_exchange) respectively.
3582
3583Using [`Acquire`] as success ordering makes the store part
3584of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3585[`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3586
3587# Panics
3588
3589Panics if `fetch_order` is [`Release`], [`AcqRel`].
3590
3591# Considerations
3592
3593This method is not magic; it is not provided by the hardware.
3594It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
3595and suffers from the same drawbacks.
3596In particular, this method will not circumvent the [ABA Problem].
3597
3598[ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3599
3600# Examples
3601
3602```
3603use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3604
3605let x = ", stringify!($atomic_type), "::new(7);
3606assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3607assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3608assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3609assert_eq!(x.load(Ordering::SeqCst), 9);
3610```"),
3611                #[inline]
3612                #[cfg_attr(
3613                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3614                    track_caller
3615                )]
3616                pub fn fetch_update<F>(
3617                    &self,
3618                    set_order: Ordering,
3619                    fetch_order: Ordering,
3620                    mut f: F,
3621                ) -> Result<$int_type, $int_type>
3622                where
3623                    F: FnMut($int_type) -> Option<$int_type>,
3624                {
3625                    let mut prev = self.load(fetch_order);
3626                    while let Some(next) = f(prev) {
3627                        match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
3628                            x @ Ok(_) => return x,
3629                            Err(next_prev) => prev = next_prev,
3630                        }
3631                    }
3632                    Err(prev)
3633                }
3634            }
3635            } // cfg_has_atomic_cas!
3636
3637            $cfg_has_atomic_cas_or_amo32_or_8! {
3638            doc_comment! {
3639                concat!("Maximum with the current value.
3640
3641Finds the maximum of the current value and the argument `val`, and
3642sets the new value to the result.
3643
3644Returns the previous value.
3645
3646`fetch_max` takes an [`Ordering`] argument which describes the memory ordering
3647of this operation. All ordering modes are possible. Note that using
3648[`Acquire`] makes the store part of this operation [`Relaxed`], and
3649using [`Release`] makes the load part [`Relaxed`].
3650
3651# Examples
3652
3653```
3654use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3655
3656let foo = ", stringify!($atomic_type), "::new(23);
3657assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
3658assert_eq!(foo.load(Ordering::SeqCst), 42);
3659```
3660
3661If you want to obtain the maximum value in one step, you can use the following:
3662
3663```
3664use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3665
3666let foo = ", stringify!($atomic_type), "::new(23);
3667let bar = 42;
3668let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
3669assert!(max_foo == 42);
3670```"),
3671                #[inline]
3672                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3673                pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
3674                    self.inner.fetch_max(val, order)
3675                }
3676            }
3677
3678            doc_comment! {
3679                concat!("Minimum with the current value.
3680
3681Finds the minimum of the current value and the argument `val`, and
3682sets the new value to the result.
3683
3684Returns the previous value.
3685
3686`fetch_min` takes an [`Ordering`] argument which describes the memory ordering
3687of this operation. All ordering modes are possible. Note that using
3688[`Acquire`] makes the store part of this operation [`Relaxed`], and
3689using [`Release`] makes the load part [`Relaxed`].
3690
3691# Examples
3692
3693```
3694use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3695
3696let foo = ", stringify!($atomic_type), "::new(23);
3697assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
3698assert_eq!(foo.load(Ordering::Relaxed), 23);
3699assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
3700assert_eq!(foo.load(Ordering::Relaxed), 22);
3701```
3702
3703If you want to obtain the minimum value in one step, you can use the following:
3704
3705```
3706use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3707
3708let foo = ", stringify!($atomic_type), "::new(23);
3709let bar = 12;
3710let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
3711assert_eq!(min_foo, 12);
3712```"),
3713                #[inline]
3714                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3715                pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
3716                    self.inner.fetch_min(val, order)
3717                }
3718            }
3719            } // $cfg_has_atomic_cas_or_amo32_or_8!
3720
3721            doc_comment! {
3722                concat!("Sets the bit at the specified bit-position to 1.
3723
3724Returns `true` if the specified bit was previously set to 1.
3725
3726`bit_set` takes an [`Ordering`] argument which describes the memory ordering
3727of this operation. All ordering modes are possible. Note that using
3728[`Acquire`] makes the store part of this operation [`Relaxed`], and
3729using [`Release`] makes the load part [`Relaxed`].
3730
3731This corresponds to x86's `lock bts`, and the implementation calls them on x86/x86_64.
3732
3733# Examples
3734
3735```
3736use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3737
3738let foo = ", stringify!($atomic_type), "::new(0b0000);
3739assert!(!foo.bit_set(0, Ordering::Relaxed));
3740assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3741assert!(foo.bit_set(0, Ordering::Relaxed));
3742assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3743```"),
3744                #[inline]
3745                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3746                pub fn bit_set(&self, bit: u32, order: Ordering) -> bool {
3747                    self.inner.bit_set(bit, order)
3748                }
3749            }
3750
3751            doc_comment! {
3752                concat!("Clears the bit at the specified bit-position to 1.
3753
3754Returns `true` if the specified bit was previously set to 1.
3755
3756`bit_clear` takes an [`Ordering`] argument which describes the memory ordering
3757of this operation. All ordering modes are possible. Note that using
3758[`Acquire`] makes the store part of this operation [`Relaxed`], and
3759using [`Release`] makes the load part [`Relaxed`].
3760
3761This corresponds to x86's `lock btr`, and the implementation calls them on x86/x86_64.
3762
3763# Examples
3764
3765```
3766use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3767
3768let foo = ", stringify!($atomic_type), "::new(0b0001);
3769assert!(foo.bit_clear(0, Ordering::Relaxed));
3770assert_eq!(foo.load(Ordering::Relaxed), 0b0000);
3771```"),
3772                #[inline]
3773                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3774                pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool {
3775                    self.inner.bit_clear(bit, order)
3776                }
3777            }
3778
3779            doc_comment! {
3780                concat!("Toggles the bit at the specified bit-position.
3781
3782Returns `true` if the specified bit was previously set to 1.
3783
3784`bit_toggle` takes an [`Ordering`] argument which describes the memory ordering
3785of this operation. All ordering modes are possible. Note that using
3786[`Acquire`] makes the store part of this operation [`Relaxed`], and
3787using [`Release`] makes the load part [`Relaxed`].
3788
3789This corresponds to x86's `lock btc`, and the implementation calls them on x86/x86_64.
3790
3791# Examples
3792
3793```
3794use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3795
3796let foo = ", stringify!($atomic_type), "::new(0b0000);
3797assert!(!foo.bit_toggle(0, Ordering::Relaxed));
3798assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3799assert!(foo.bit_toggle(0, Ordering::Relaxed));
3800assert_eq!(foo.load(Ordering::Relaxed), 0b0000);
3801```"),
3802                #[inline]
3803                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3804                pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool {
3805                    self.inner.bit_toggle(bit, order)
3806                }
3807            }
3808
3809            doc_comment! {
3810                concat!("Logical negates the current value, and sets the new value to the result.
3811
3812Returns the previous value.
3813
3814`fetch_not` takes an [`Ordering`] argument which describes the memory ordering
3815of this operation. All ordering modes are possible. Note that using
3816[`Acquire`] makes the store part of this operation [`Relaxed`], and
3817using [`Release`] makes the load part [`Relaxed`].
3818
3819# Examples
3820
3821```
3822use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3823
3824let foo = ", stringify!($atomic_type), "::new(0);
3825assert_eq!(foo.fetch_not(Ordering::Relaxed), 0);
3826assert_eq!(foo.load(Ordering::Relaxed), !0);
3827```"),
3828                #[inline]
3829                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3830                pub fn fetch_not(&self, order: Ordering) -> $int_type {
3831                    self.inner.fetch_not(order)
3832                }
3833            }
3834
3835            doc_comment! {
3836                concat!("Logical negates the current value, and sets the new value to the result.
3837
3838Unlike `fetch_not`, this does not return the previous value.
3839
3840`not` takes an [`Ordering`] argument which describes the memory ordering
3841of this operation. All ordering modes are possible. Note that using
3842[`Acquire`] makes the store part of this operation [`Relaxed`], and
3843using [`Release`] makes the load part [`Relaxed`].
3844
3845This function may generate more efficient code than `fetch_not` on some platforms.
3846
3847- x86/x86_64: `lock not` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3848- MSP430: `inv` instead of disabling interrupts ({8,16}-bit atomics)
3849
3850# Examples
3851
3852```
3853use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3854
3855let foo = ", stringify!($atomic_type), "::new(0);
3856foo.not(Ordering::Relaxed);
3857assert_eq!(foo.load(Ordering::Relaxed), !0);
3858```"),
3859                #[inline]
3860                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3861                pub fn not(&self, order: Ordering) {
3862                    self.inner.not(order);
3863                }
3864            }
3865
3866            cfg_has_atomic_cas! {
3867            doc_comment! {
3868                concat!("Negates the current value, and sets the new value to the result.
3869
3870Returns the previous value.
3871
3872`fetch_neg` takes an [`Ordering`] argument which describes the memory ordering
3873of this operation. All ordering modes are possible. Note that using
3874[`Acquire`] makes the store part of this operation [`Relaxed`], and
3875using [`Release`] makes the load part [`Relaxed`].
3876
3877# Examples
3878
3879```
3880use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3881
3882let foo = ", stringify!($atomic_type), "::new(5);
3883assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5);
3884assert_eq!(foo.load(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
3885assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
3886assert_eq!(foo.load(Ordering::Relaxed), 5);
3887```"),
3888                #[inline]
3889                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3890                pub fn fetch_neg(&self, order: Ordering) -> $int_type {
3891                    self.inner.fetch_neg(order)
3892                }
3893            }
3894
3895            doc_comment! {
3896                concat!("Negates the current value, and sets the new value to the result.
3897
3898Unlike `fetch_neg`, this does not return the previous value.
3899
3900`neg` takes an [`Ordering`] argument which describes the memory ordering
3901of this operation. All ordering modes are possible. Note that using
3902[`Acquire`] makes the store part of this operation [`Relaxed`], and
3903using [`Release`] makes the load part [`Relaxed`].
3904
3905This function may generate more efficient code than `fetch_neg` on some platforms.
3906
3907- x86/x86_64: `lock neg` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3908
3909# Examples
3910
3911```
3912use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3913
3914let foo = ", stringify!($atomic_type), "::new(5);
3915foo.neg(Ordering::Relaxed);
3916assert_eq!(foo.load(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
3917foo.neg(Ordering::Relaxed);
3918assert_eq!(foo.load(Ordering::Relaxed), 5);
3919```"),
3920                #[inline]
3921                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3922                pub fn neg(&self, order: Ordering) {
3923                    self.inner.neg(order);
3924                }
3925            }
3926            } // cfg_has_atomic_cas!
3927            } // cfg_has_atomic_cas_or_amo32!
3928
3929            const_fn! {
3930                const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
3931                /// Returns a mutable pointer to the underlying integer.
3932                ///
3933                /// Returning an `*mut` pointer from a shared reference to this atomic is
3934                /// safe because the atomic types work with interior mutability. Any use of
3935                /// the returned raw pointer requires an `unsafe` block and has to uphold
3936                /// the safety requirements. If there is concurrent access, note the following
3937                /// additional safety requirements:
3938                ///
3939                /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
3940                ///   operations on it must be atomic.
3941                /// - Otherwise, any concurrent operations on it must be compatible with
3942                ///   operations performed by this atomic type.
3943                ///
3944                /// This is `const fn` on Rust 1.58+.
3945                #[inline]
3946                pub const fn as_ptr(&self) -> *mut $int_type {
3947                    self.inner.as_ptr()
3948                }
3949            }
3950        }
3951        // See https://github.com/taiki-e/portable-atomic/issues/180
3952        #[cfg(not(feature = "require-cas"))]
3953        cfg_no_atomic_cas! {
3954        #[doc(hidden)]
3955        #[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
3956        impl<'a> $atomic_type {
3957            $cfg_no_atomic_cas_or_amo32_or_8! {
3958            #[inline]
3959            pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type
3960            where
3961                &'a Self: HasSwap,
3962            {
3963                unimplemented!()
3964            }
3965            } // $cfg_no_atomic_cas_or_amo32_or_8!
3966            #[inline]
3967            pub fn compare_exchange(
3968                &self,
3969                current: $int_type,
3970                new: $int_type,
3971                success: Ordering,
3972                failure: Ordering,
3973            ) -> Result<$int_type, $int_type>
3974            where
3975                &'a Self: HasCompareExchange,
3976            {
3977                unimplemented!()
3978            }
3979            #[inline]
3980            pub fn compare_exchange_weak(
3981                &self,
3982                current: $int_type,
3983                new: $int_type,
3984                success: Ordering,
3985                failure: Ordering,
3986            ) -> Result<$int_type, $int_type>
3987            where
3988                &'a Self: HasCompareExchangeWeak,
3989            {
3990                unimplemented!()
3991            }
3992            $cfg_no_atomic_cas_or_amo32_or_8! {
3993            #[inline]
3994            pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type
3995            where
3996                &'a Self: HasFetchAdd,
3997            {
3998                unimplemented!()
3999            }
4000            #[inline]
4001            pub fn add(&self, val: $int_type, order: Ordering)
4002            where
4003                &'a Self: HasAdd,
4004            {
4005                unimplemented!()
4006            }
4007            #[inline]
4008            pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type
4009            where
4010                &'a Self: HasFetchSub,
4011            {
4012                unimplemented!()
4013            }
4014            #[inline]
4015            pub fn sub(&self, val: $int_type, order: Ordering)
4016            where
4017                &'a Self: HasSub,
4018            {
4019                unimplemented!()
4020            }
4021            } // $cfg_no_atomic_cas_or_amo32_or_8!
4022            cfg_no_atomic_cas_or_amo32! {
4023            #[inline]
4024            pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type
4025            where
4026                &'a Self: HasFetchAnd,
4027            {
4028                unimplemented!()
4029            }
4030            #[inline]
4031            pub fn and(&self, val: $int_type, order: Ordering)
4032            where
4033                &'a Self: HasAnd,
4034            {
4035                unimplemented!()
4036            }
4037            } // cfg_no_atomic_cas_or_amo32!
4038            #[inline]
4039            pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type
4040            where
4041                &'a Self: HasFetchNand,
4042            {
4043                unimplemented!()
4044            }
4045            cfg_no_atomic_cas_or_amo32! {
4046            #[inline]
4047            pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type
4048            where
4049                &'a Self: HasFetchOr,
4050            {
4051                unimplemented!()
4052            }
4053            #[inline]
4054            pub fn or(&self, val: $int_type, order: Ordering)
4055            where
4056                &'a Self: HasOr,
4057            {
4058                unimplemented!()
4059            }
4060            #[inline]
4061            pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type
4062            where
4063                &'a Self: HasFetchXor,
4064            {
4065                unimplemented!()
4066            }
4067            #[inline]
4068            pub fn xor(&self, val: $int_type, order: Ordering)
4069            where
4070                &'a Self: HasXor,
4071            {
4072                unimplemented!()
4073            }
4074            } // cfg_no_atomic_cas_or_amo32!
4075            #[inline]
4076            pub fn fetch_update<F>(
4077                &self,
4078                set_order: Ordering,
4079                fetch_order: Ordering,
4080                f: F,
4081            ) -> Result<$int_type, $int_type>
4082            where
4083                F: FnMut($int_type) -> Option<$int_type>,
4084                &'a Self: HasFetchUpdate,
4085            {
4086                unimplemented!()
4087            }
4088            $cfg_no_atomic_cas_or_amo32_or_8! {
4089            #[inline]
4090            pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type
4091            where
4092                &'a Self: HasFetchMax,
4093            {
4094                unimplemented!()
4095            }
4096            #[inline]
4097            pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type
4098            where
4099                &'a Self: HasFetchMin,
4100            {
4101                unimplemented!()
4102            }
4103            } // $cfg_no_atomic_cas_or_amo32_or_8!
4104            cfg_no_atomic_cas_or_amo32! {
4105            #[inline]
4106            pub fn bit_set(&self, bit: u32, order: Ordering) -> bool
4107            where
4108                &'a Self: HasBitSet,
4109            {
4110                unimplemented!()
4111            }
4112            #[inline]
4113            pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool
4114            where
4115                &'a Self: HasBitClear,
4116            {
4117                unimplemented!()
4118            }
4119            #[inline]
4120            pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool
4121            where
4122                &'a Self: HasBitToggle,
4123            {
4124                unimplemented!()
4125            }
4126            #[inline]
4127            pub fn fetch_not(&self, order: Ordering) -> $int_type
4128            where
4129                &'a Self: HasFetchNot,
4130            {
4131                unimplemented!()
4132            }
4133            #[inline]
4134            pub fn not(&self, order: Ordering)
4135            where
4136                &'a Self: HasNot,
4137            {
4138                unimplemented!()
4139            }
4140            } // cfg_no_atomic_cas_or_amo32!
4141            #[inline]
4142            pub fn fetch_neg(&self, order: Ordering) -> $int_type
4143            where
4144                &'a Self: HasFetchNeg,
4145            {
4146                unimplemented!()
4147            }
4148            #[inline]
4149            pub fn neg(&self, order: Ordering)
4150            where
4151                &'a Self: HasNeg,
4152            {
4153                unimplemented!()
4154            }
4155        }
4156        } // cfg_no_atomic_cas!
4157        $(
4158            #[$cfg_float]
4159            atomic_int!(float,
4160                #[$cfg_float] $atomic_float_type, $float_type, $atomic_type, $int_type, $align
4161            );
4162        )?
4163    };
4164
4165    // AtomicF* impls
4166    (float,
4167        #[$cfg_float:meta]
4168        $atomic_type:ident,
4169        $float_type:ident,
4170        $atomic_int_type:ident,
4171        $int_type:ident,
4172        $align:literal
4173    ) => {
4174        doc_comment! {
4175            concat!("A floating point type which can be safely shared between threads.
4176
4177This type has the same in-memory representation as the underlying floating point type,
4178[`", stringify!($float_type), "`].
4179"
4180            ),
4181            #[cfg_attr(docsrs, doc($cfg_float))]
4182            // We can use #[repr(transparent)] here, but #[repr(C, align(N))]
4183            // will show clearer docs.
4184            #[repr(C, align($align))]
4185            pub struct $atomic_type {
4186                inner: imp::float::$atomic_type,
4187            }
4188        }
4189
4190        impl Default for $atomic_type {
4191            #[inline]
4192            fn default() -> Self {
4193                Self::new($float_type::default())
4194            }
4195        }
4196
4197        impl From<$float_type> for $atomic_type {
4198            #[inline]
4199            fn from(v: $float_type) -> Self {
4200                Self::new(v)
4201            }
4202        }
4203
4204        // UnwindSafe is implicitly implemented.
4205        #[cfg(not(portable_atomic_no_core_unwind_safe))]
4206        impl core::panic::RefUnwindSafe for $atomic_type {}
4207        #[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
4208        impl std::panic::RefUnwindSafe for $atomic_type {}
4209
4210        impl_debug_and_serde!($atomic_type);
4211
4212        impl $atomic_type {
4213            /// Creates a new atomic float.
4214            #[inline]
4215            #[must_use]
4216            pub const fn new(v: $float_type) -> Self {
4217                static_assert_layout!($atomic_type, $float_type);
4218                Self { inner: imp::float::$atomic_type::new(v) }
4219            }
4220
4221            // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
4222            #[cfg(not(portable_atomic_no_const_mut_refs))]
4223            doc_comment! {
4224                concat!("Creates a new reference to an atomic float from a pointer.
4225
4226This is `const fn` on Rust 1.83+.
4227
4228# Safety
4229
4230* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
4231  can be bigger than `align_of::<", stringify!($float_type), ">()`).
4232* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
4233* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
4234  behind `ptr` must have a happens-before relationship with atomic accesses via
4235  the returned value (or vice-versa).
4236  * In other words, time periods where the value is accessed atomically may not
4237    overlap with periods where the value is accessed non-atomically.
4238  * This requirement is trivially satisfied if `ptr` is never used non-atomically
4239    for the duration of lifetime `'a`. Most use cases should be able to follow
4240    this guideline.
4241  * This requirement is also trivially satisfied if all accesses (atomic or not) are
4242    done from the same thread.
4243* If this atomic type is *not* lock-free:
4244  * Any accesses to the value behind `ptr` must have a happens-before relationship
4245    with accesses via the returned value (or vice-versa).
4246  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
4247    be compatible with operations performed by this atomic type.
4248* This method must not be used to create overlapping or mixed-size atomic
4249  accesses, as these are not supported by the memory model.
4250
4251[valid]: core::ptr#safety"),
4252                #[inline]
4253                #[must_use]
4254                pub const unsafe fn from_ptr<'a>(ptr: *mut $float_type) -> &'a Self {
4255                    #[allow(clippy::cast_ptr_alignment)]
4256                    // SAFETY: guaranteed by the caller
4257                    unsafe { &*(ptr as *mut Self) }
4258                }
4259            }
4260            #[cfg(portable_atomic_no_const_mut_refs)]
4261            doc_comment! {
4262                concat!("Creates a new reference to an atomic float from a pointer.
4263
4264This is `const fn` on Rust 1.83+.
4265
4266# Safety
4267
4268* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
4269  can be bigger than `align_of::<", stringify!($float_type), ">()`).
4270* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
4271* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
4272  behind `ptr` must have a happens-before relationship with atomic accesses via
4273  the returned value (or vice-versa).
4274  * In other words, time periods where the value is accessed atomically may not
4275    overlap with periods where the value is accessed non-atomically.
4276  * This requirement is trivially satisfied if `ptr` is never used non-atomically
4277    for the duration of lifetime `'a`. Most use cases should be able to follow
4278    this guideline.
4279  * This requirement is also trivially satisfied if all accesses (atomic or not) are
4280    done from the same thread.
4281* If this atomic type is *not* lock-free:
4282  * Any accesses to the value behind `ptr` must have a happens-before relationship
4283    with accesses via the returned value (or vice-versa).
4284  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
4285    be compatible with operations performed by this atomic type.
4286* This method must not be used to create overlapping or mixed-size atomic
4287  accesses, as these are not supported by the memory model.
4288
4289[valid]: core::ptr#safety"),
4290                #[inline]
4291                #[must_use]
4292                pub unsafe fn from_ptr<'a>(ptr: *mut $float_type) -> &'a Self {
4293                    #[allow(clippy::cast_ptr_alignment)]
4294                    // SAFETY: guaranteed by the caller
4295                    unsafe { &*(ptr as *mut Self) }
4296                }
4297            }
4298
4299            /// Returns `true` if operations on values of this type are lock-free.
4300            ///
4301            /// If the compiler or the platform doesn't support the necessary
4302            /// atomic instructions, global locks for every potentially
4303            /// concurrent atomic operation will be used.
4304            #[inline]
4305            #[must_use]
4306            pub fn is_lock_free() -> bool {
4307                <imp::float::$atomic_type>::is_lock_free()
4308            }
4309
4310            /// Returns `true` if operations on values of this type are lock-free.
4311            ///
4312            /// If the compiler or the platform doesn't support the necessary
4313            /// atomic instructions, global locks for every potentially
4314            /// concurrent atomic operation will be used.
4315            ///
4316            /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
4317            /// this type may be lock-free even if the function returns false.
4318            #[inline]
4319            #[must_use]
4320            pub const fn is_always_lock_free() -> bool {
4321                <imp::float::$atomic_type>::IS_ALWAYS_LOCK_FREE
4322            }
4323            #[cfg(test)]
4324            const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
4325
4326            const_fn! {
4327                const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
4328                /// Returns a mutable reference to the underlying float.
4329                ///
4330                /// This is safe because the mutable reference guarantees that no other threads are
4331                /// concurrently accessing the atomic data.
4332                ///
4333                /// This is `const fn` on Rust 1.83+.
4334                #[inline]
4335                pub const fn get_mut(&mut self) -> &mut $float_type {
4336                    // SAFETY: the mutable reference guarantees unique ownership.
4337                    unsafe { &mut *self.as_ptr() }
4338                }
4339            }
4340
4341            // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
4342            // https://github.com/rust-lang/rust/issues/76314
4343
4344            const_fn! {
4345                const_if: #[cfg(not(portable_atomic_no_const_transmute))];
4346                /// Consumes the atomic and returns the contained value.
4347                ///
4348                /// This is safe because passing `self` by value guarantees that no other threads are
4349                /// concurrently accessing the atomic data.
4350                ///
4351                /// This is `const fn` on Rust 1.56+.
4352                #[inline]
4353                pub const fn into_inner(self) -> $float_type {
4354                    // SAFETY: $atomic_type and $float_type have the same size and in-memory representations,
4355                    // so they can be safely transmuted.
4356                    // (const UnsafeCell::into_inner is unstable)
4357                    unsafe { core::mem::transmute(self) }
4358                }
4359            }
4360
4361            /// Loads a value from the atomic float.
4362            ///
4363            /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
4364            /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
4365            ///
4366            /// # Panics
4367            ///
4368            /// Panics if `order` is [`Release`] or [`AcqRel`].
4369            #[inline]
4370            #[cfg_attr(
4371                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4372                track_caller
4373            )]
4374            pub fn load(&self, order: Ordering) -> $float_type {
4375                self.inner.load(order)
4376            }
4377
4378            /// Stores a value into the atomic float.
4379            ///
4380            /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
4381            ///  Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
4382            ///
4383            /// # Panics
4384            ///
4385            /// Panics if `order` is [`Acquire`] or [`AcqRel`].
4386            #[inline]
4387            #[cfg_attr(
4388                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4389                track_caller
4390            )]
4391            pub fn store(&self, val: $float_type, order: Ordering) {
4392                self.inner.store(val, order)
4393            }
4394
4395            cfg_has_atomic_cas_or_amo32! {
4396            /// Stores a value into the atomic float, returning the previous value.
4397            ///
4398            /// `swap` takes an [`Ordering`] argument which describes the memory ordering
4399            /// of this operation. All ordering modes are possible. Note that using
4400            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4401            /// using [`Release`] makes the load part [`Relaxed`].
4402            #[inline]
4403            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4404            pub fn swap(&self, val: $float_type, order: Ordering) -> $float_type {
4405                self.inner.swap(val, order)
4406            }
4407
4408            cfg_has_atomic_cas! {
4409            /// Stores a value into the atomic float if the current value is the same as
4410            /// the `current` value.
4411            ///
4412            /// The return value is a result indicating whether the new value was written and
4413            /// containing the previous value. On success this value is guaranteed to be equal to
4414            /// `current`.
4415            ///
4416            /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
4417            /// ordering of this operation. `success` describes the required ordering for the
4418            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
4419            /// `failure` describes the required ordering for the load operation that takes place when
4420            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
4421            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
4422            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
4423            ///
4424            /// # Panics
4425            ///
4426            /// Panics if `failure` is [`Release`], [`AcqRel`].
4427            #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
4428            #[inline]
4429            #[cfg_attr(
4430                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4431                track_caller
4432            )]
4433            pub fn compare_exchange(
4434                &self,
4435                current: $float_type,
4436                new: $float_type,
4437                success: Ordering,
4438                failure: Ordering,
4439            ) -> Result<$float_type, $float_type> {
4440                self.inner.compare_exchange(current, new, success, failure)
4441            }
4442
4443            /// Stores a value into the atomic float if the current value is the same as
4444            /// the `current` value.
4445            /// Unlike [`compare_exchange`](Self::compare_exchange)
4446            /// this function is allowed to spuriously fail even
4447            /// when the comparison succeeds, which can result in more efficient code on some
4448            /// platforms. The return value is a result indicating whether the new value was
4449            /// written and containing the previous value.
4450            ///
4451            /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
4452            /// ordering of this operation. `success` describes the required ordering for the
4453            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
4454            /// `failure` describes the required ordering for the load operation that takes place when
4455            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
4456            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
4457            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
4458            ///
4459            /// # Panics
4460            ///
4461            /// Panics if `failure` is [`Release`], [`AcqRel`].
4462            #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
4463            #[inline]
4464            #[cfg_attr(
4465                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4466                track_caller
4467            )]
4468            pub fn compare_exchange_weak(
4469                &self,
4470                current: $float_type,
4471                new: $float_type,
4472                success: Ordering,
4473                failure: Ordering,
4474            ) -> Result<$float_type, $float_type> {
4475                self.inner.compare_exchange_weak(current, new, success, failure)
4476            }
4477
4478            /// Adds to the current value, returning the previous value.
4479            ///
4480            /// This operation wraps around on overflow.
4481            ///
4482            /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
4483            /// of this operation. All ordering modes are possible. Note that using
4484            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4485            /// using [`Release`] makes the load part [`Relaxed`].
4486            #[inline]
4487            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4488            pub fn fetch_add(&self, val: $float_type, order: Ordering) -> $float_type {
4489                self.inner.fetch_add(val, order)
4490            }
4491
4492            /// Subtracts from the current value, returning the previous value.
4493            ///
4494            /// This operation wraps around on overflow.
4495            ///
4496            /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
4497            /// of this operation. All ordering modes are possible. Note that using
4498            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4499            /// using [`Release`] makes the load part [`Relaxed`].
4500            #[inline]
4501            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4502            pub fn fetch_sub(&self, val: $float_type, order: Ordering) -> $float_type {
4503                self.inner.fetch_sub(val, order)
4504            }
4505
4506            /// Fetches the value, and applies a function to it that returns an optional
4507            /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
4508            /// `Err(previous_value)`.
4509            ///
4510            /// Note: This may call the function multiple times if the value has been changed from other threads in
4511            /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
4512            /// only once to the stored value.
4513            ///
4514            /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
4515            /// The first describes the required ordering for when the operation finally succeeds while the second
4516            /// describes the required ordering for loads. These correspond to the success and failure orderings of
4517            /// [`compare_exchange`](Self::compare_exchange) respectively.
4518            ///
4519            /// Using [`Acquire`] as success ordering makes the store part
4520            /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
4521            /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
4522            ///
4523            /// # Panics
4524            ///
4525            /// Panics if `fetch_order` is [`Release`], [`AcqRel`].
4526            ///
4527            /// # Considerations
4528            ///
4529            /// This method is not magic; it is not provided by the hardware.
4530            /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
4531            /// and suffers from the same drawbacks.
4532            /// In particular, this method will not circumvent the [ABA Problem].
4533            ///
4534            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
4535            #[inline]
4536            #[cfg_attr(
4537                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4538                track_caller
4539            )]
4540            pub fn fetch_update<F>(
4541                &self,
4542                set_order: Ordering,
4543                fetch_order: Ordering,
4544                mut f: F,
4545            ) -> Result<$float_type, $float_type>
4546            where
4547                F: FnMut($float_type) -> Option<$float_type>,
4548            {
4549                let mut prev = self.load(fetch_order);
4550                while let Some(next) = f(prev) {
4551                    match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
4552                        x @ Ok(_) => return x,
4553                        Err(next_prev) => prev = next_prev,
4554                    }
4555                }
4556                Err(prev)
4557            }
4558
4559            /// Maximum with the current value.
4560            ///
4561            /// Finds the maximum of the current value and the argument `val`, and
4562            /// sets the new value to the result.
4563            ///
4564            /// Returns the previous value.
4565            ///
4566            /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
4567            /// of this operation. All ordering modes are possible. Note that using
4568            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4569            /// using [`Release`] makes the load part [`Relaxed`].
4570            #[inline]
4571            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4572            pub fn fetch_max(&self, val: $float_type, order: Ordering) -> $float_type {
4573                self.inner.fetch_max(val, order)
4574            }
4575
4576            /// Minimum with the current value.
4577            ///
4578            /// Finds the minimum of the current value and the argument `val`, and
4579            /// sets the new value to the result.
4580            ///
4581            /// Returns the previous value.
4582            ///
4583            /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
4584            /// of this operation. All ordering modes are possible. Note that using
4585            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4586            /// using [`Release`] makes the load part [`Relaxed`].
4587            #[inline]
4588            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4589            pub fn fetch_min(&self, val: $float_type, order: Ordering) -> $float_type {
4590                self.inner.fetch_min(val, order)
4591            }
4592            } // cfg_has_atomic_cas!
4593
4594            /// Negates the current value, and sets the new value to the result.
4595            ///
4596            /// Returns the previous value.
4597            ///
4598            /// `fetch_neg` takes an [`Ordering`] argument which describes the memory ordering
4599            /// of this operation. All ordering modes are possible. Note that using
4600            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4601            /// using [`Release`] makes the load part [`Relaxed`].
4602            #[inline]
4603            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4604            pub fn fetch_neg(&self, order: Ordering) -> $float_type {
4605                self.inner.fetch_neg(order)
4606            }
4607
4608            /// Computes the absolute value of the current value, and sets the
4609            /// new value to the result.
4610            ///
4611            /// Returns the previous value.
4612            ///
4613            /// `fetch_abs` takes an [`Ordering`] argument which describes the memory ordering
4614            /// of this operation. All ordering modes are possible. Note that using
4615            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4616            /// using [`Release`] makes the load part [`Relaxed`].
4617            #[inline]
4618            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4619            pub fn fetch_abs(&self, order: Ordering) -> $float_type {
4620                self.inner.fetch_abs(order)
4621            }
4622            } // cfg_has_atomic_cas_or_amo32!
4623
4624            #[cfg(not(portable_atomic_no_const_raw_ptr_deref))]
4625            doc_comment! {
4626                concat!("Raw transmutation to `&", stringify!($atomic_int_type), "`.
4627
4628See [`", stringify!($float_type) ,"::from_bits`] for some discussion of the
4629portability of this operation (there are almost no issues).
4630
4631This is `const fn` on Rust 1.58+."),
4632                #[inline]
4633                pub const fn as_bits(&self) -> &$atomic_int_type {
4634                    self.inner.as_bits()
4635                }
4636            }
4637            #[cfg(portable_atomic_no_const_raw_ptr_deref)]
4638            doc_comment! {
4639                concat!("Raw transmutation to `&", stringify!($atomic_int_type), "`.
4640
4641See [`", stringify!($float_type) ,"::from_bits`] for some discussion of the
4642portability of this operation (there are almost no issues).
4643
4644This is `const fn` on Rust 1.58+."),
4645                #[inline]
4646                pub fn as_bits(&self) -> &$atomic_int_type {
4647                    self.inner.as_bits()
4648                }
4649            }
4650
4651            const_fn! {
4652                const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
4653                /// Returns a mutable pointer to the underlying float.
4654                ///
4655                /// Returning an `*mut` pointer from a shared reference to this atomic is
4656                /// safe because the atomic types work with interior mutability. Any use of
4657                /// the returned raw pointer requires an `unsafe` block and has to uphold
4658                /// the safety requirements. If there is concurrent access, note the following
4659                /// additional safety requirements:
4660                ///
4661                /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
4662                ///   operations on it must be atomic.
4663                /// - Otherwise, any concurrent operations on it must be compatible with
4664                ///   operations performed by this atomic type.
4665                ///
4666                /// This is `const fn` on Rust 1.58+.
4667                #[inline]
4668                pub const fn as_ptr(&self) -> *mut $float_type {
4669                    self.inner.as_ptr()
4670                }
4671            }
4672        }
4673        // See https://github.com/taiki-e/portable-atomic/issues/180
4674        #[cfg(not(feature = "require-cas"))]
4675        cfg_no_atomic_cas! {
4676        #[doc(hidden)]
4677        #[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
4678        impl<'a> $atomic_type {
4679            cfg_no_atomic_cas_or_amo32! {
4680            #[inline]
4681            pub fn swap(&self, val: $float_type, order: Ordering) -> $float_type
4682            where
4683                &'a Self: HasSwap,
4684            {
4685                unimplemented!()
4686            }
4687            } // cfg_no_atomic_cas_or_amo32!
4688            #[inline]
4689            pub fn compare_exchange(
4690                &self,
4691                current: $float_type,
4692                new: $float_type,
4693                success: Ordering,
4694                failure: Ordering,
4695            ) -> Result<$float_type, $float_type>
4696            where
4697                &'a Self: HasCompareExchange,
4698            {
4699                unimplemented!()
4700            }
4701            #[inline]
4702            pub fn compare_exchange_weak(
4703                &self,
4704                current: $float_type,
4705                new: $float_type,
4706                success: Ordering,
4707                failure: Ordering,
4708            ) -> Result<$float_type, $float_type>
4709            where
4710                &'a Self: HasCompareExchangeWeak,
4711            {
4712                unimplemented!()
4713            }
4714            #[inline]
4715            pub fn fetch_add(&self, val: $float_type, order: Ordering) -> $float_type
4716            where
4717                &'a Self: HasFetchAdd,
4718            {
4719                unimplemented!()
4720            }
4721            #[inline]
4722            pub fn fetch_sub(&self, val: $float_type, order: Ordering) -> $float_type
4723            where
4724                &'a Self: HasFetchSub,
4725            {
4726                unimplemented!()
4727            }
4728            #[inline]
4729            pub fn fetch_update<F>(
4730                &self,
4731                set_order: Ordering,
4732                fetch_order: Ordering,
4733                f: F,
4734            ) -> Result<$float_type, $float_type>
4735            where
4736                F: FnMut($float_type) -> Option<$float_type>,
4737                &'a Self: HasFetchUpdate,
4738            {
4739                unimplemented!()
4740            }
4741            #[inline]
4742            pub fn fetch_max(&self, val: $float_type, order: Ordering) -> $float_type
4743            where
4744                &'a Self: HasFetchMax,
4745            {
4746                unimplemented!()
4747            }
4748            #[inline]
4749            pub fn fetch_min(&self, val: $float_type, order: Ordering) -> $float_type
4750            where
4751                &'a Self: HasFetchMin,
4752            {
4753                unimplemented!()
4754            }
4755            cfg_no_atomic_cas_or_amo32! {
4756            #[inline]
4757            pub fn fetch_neg(&self, order: Ordering) -> $float_type
4758            where
4759                &'a Self: HasFetchNeg,
4760            {
4761                unimplemented!()
4762            }
4763            #[inline]
4764            pub fn fetch_abs(&self, order: Ordering) -> $float_type
4765            where
4766                &'a Self: HasFetchAbs,
4767            {
4768                unimplemented!()
4769            }
4770            } // cfg_no_atomic_cas_or_amo32!
4771        }
4772        } // cfg_no_atomic_cas!
4773    };
4774}
4775
4776cfg_has_atomic_ptr! {
4777    #[cfg(target_pointer_width = "16")]
4778    atomic_int!(AtomicIsize, isize, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4779    #[cfg(target_pointer_width = "16")]
4780    atomic_int!(AtomicUsize, usize, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4781    #[cfg(target_pointer_width = "32")]
4782    atomic_int!(AtomicIsize, isize, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4783    #[cfg(target_pointer_width = "32")]
4784    atomic_int!(AtomicUsize, usize, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4785    #[cfg(target_pointer_width = "64")]
4786    atomic_int!(AtomicIsize, isize, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4787    #[cfg(target_pointer_width = "64")]
4788    atomic_int!(AtomicUsize, usize, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4789    #[cfg(target_pointer_width = "128")]
4790    atomic_int!(AtomicIsize, isize, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4791    #[cfg(target_pointer_width = "128")]
4792    atomic_int!(AtomicUsize, usize, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4793}
4794
4795cfg_has_atomic_8! {
4796    atomic_int!(AtomicI8, i8, 1, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4797    atomic_int!(AtomicU8, u8, 1, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4798}
4799cfg_has_atomic_16! {
4800    atomic_int!(AtomicI16, i16, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4801    atomic_int!(AtomicU16, u16, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8,
4802        #[cfg(all(feature = "float", portable_atomic_unstable_f16))] AtomicF16, f16);
4803}
4804cfg_has_atomic_32! {
4805    atomic_int!(AtomicI32, i32, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4806    atomic_int!(AtomicU32, u32, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
4807        #[cfg(feature = "float")] AtomicF32, f32);
4808}
4809cfg_has_atomic_64! {
4810    atomic_int!(AtomicI64, i64, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4811    atomic_int!(AtomicU64, u64, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
4812        #[cfg(feature = "float")] AtomicF64, f64);
4813}
4814cfg_has_atomic_128! {
4815    atomic_int!(AtomicI128, i128, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4816    atomic_int!(AtomicU128, u128, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
4817        #[cfg(all(feature = "float", portable_atomic_unstable_f128))] AtomicF128, f128);
4818}
4819
4820// See https://github.com/taiki-e/portable-atomic/issues/180
4821#[cfg(not(feature = "require-cas"))]
4822cfg_no_atomic_cas! {
4823cfg_no_atomic_cas_or_amo32! {
4824#[cfg(feature = "float")]
4825use self::diagnostic_helper::HasFetchAbs;
4826use self::diagnostic_helper::{
4827    HasAnd, HasBitClear, HasBitSet, HasBitToggle, HasFetchAnd, HasFetchByteAdd, HasFetchByteSub,
4828    HasFetchNot, HasFetchOr, HasFetchPtrAdd, HasFetchPtrSub, HasFetchXor, HasNot, HasOr, HasXor,
4829};
4830} // cfg_no_atomic_cas_or_amo32!
4831cfg_no_atomic_cas_or_amo8! {
4832use self::diagnostic_helper::{HasAdd, HasSub, HasSwap};
4833} // cfg_no_atomic_cas_or_amo8!
4834#[cfg_attr(not(feature = "float"), allow(unused_imports))]
4835use self::diagnostic_helper::{
4836    HasCompareExchange, HasCompareExchangeWeak, HasFetchAdd, HasFetchMax, HasFetchMin,
4837    HasFetchNand, HasFetchNeg, HasFetchSub, HasFetchUpdate, HasNeg,
4838};
4839#[cfg_attr(
4840    any(
4841        all(
4842            portable_atomic_no_atomic_load_store,
4843            not(any(
4844                target_arch = "avr",
4845                target_arch = "bpf",
4846                target_arch = "msp430",
4847                target_arch = "riscv32",
4848                target_arch = "riscv64",
4849                feature = "critical-section",
4850            )),
4851        ),
4852        not(feature = "float"),
4853    ),
4854    allow(dead_code, unreachable_pub)
4855)]
4856#[allow(unknown_lints, unnameable_types)] // Not public API. unnameable_types is available on Rust 1.79+
4857mod diagnostic_helper {
4858    cfg_no_atomic_cas_or_amo8! {
4859    #[doc(hidden)]
4860    #[cfg_attr(
4861        not(portable_atomic_no_diagnostic_namespace),
4862        diagnostic::on_unimplemented(
4863            message = "`swap` requires atomic CAS but not available on this target by default",
4864            label = "this associated function is not available on this target by default",
4865            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4866            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4867        )
4868    )]
4869    pub trait HasSwap {}
4870    } // cfg_no_atomic_cas_or_amo8!
4871    #[doc(hidden)]
4872    #[cfg_attr(
4873        not(portable_atomic_no_diagnostic_namespace),
4874        diagnostic::on_unimplemented(
4875            message = "`compare_exchange` requires atomic CAS but not available on this target by default",
4876            label = "this associated function is not available on this target by default",
4877            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4878            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4879        )
4880    )]
4881    pub trait HasCompareExchange {}
4882    #[doc(hidden)]
4883    #[cfg_attr(
4884        not(portable_atomic_no_diagnostic_namespace),
4885        diagnostic::on_unimplemented(
4886            message = "`compare_exchange_weak` requires atomic CAS but not available on this target by default",
4887            label = "this associated function is not available on this target by default",
4888            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4889            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4890        )
4891    )]
4892    pub trait HasCompareExchangeWeak {}
4893    #[doc(hidden)]
4894    #[cfg_attr(
4895        not(portable_atomic_no_diagnostic_namespace),
4896        diagnostic::on_unimplemented(
4897            message = "`fetch_add` requires atomic CAS but not available on this target by default",
4898            label = "this associated function is not available on this target by default",
4899            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4900            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4901        )
4902    )]
4903    pub trait HasFetchAdd {}
4904    cfg_no_atomic_cas_or_amo8! {
4905    #[doc(hidden)]
4906    #[cfg_attr(
4907        not(portable_atomic_no_diagnostic_namespace),
4908        diagnostic::on_unimplemented(
4909            message = "`add` requires atomic CAS but not available on this target by default",
4910            label = "this associated function is not available on this target by default",
4911            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4912            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4913        )
4914    )]
4915    pub trait HasAdd {}
4916    } // cfg_no_atomic_cas_or_amo8!
4917    #[doc(hidden)]
4918    #[cfg_attr(
4919        not(portable_atomic_no_diagnostic_namespace),
4920        diagnostic::on_unimplemented(
4921            message = "`fetch_sub` requires atomic CAS but not available on this target by default",
4922            label = "this associated function is not available on this target by default",
4923            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4924            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4925        )
4926    )]
4927    pub trait HasFetchSub {}
4928    cfg_no_atomic_cas_or_amo8! {
4929    #[doc(hidden)]
4930    #[cfg_attr(
4931        not(portable_atomic_no_diagnostic_namespace),
4932        diagnostic::on_unimplemented(
4933            message = "`sub` requires atomic CAS but not available on this target by default",
4934            label = "this associated function is not available on this target by default",
4935            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4936            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4937        )
4938    )]
4939    pub trait HasSub {}
4940    } // cfg_no_atomic_cas_or_amo8!
4941    cfg_no_atomic_cas_or_amo32! {
4942    #[doc(hidden)]
4943    #[cfg_attr(
4944        not(portable_atomic_no_diagnostic_namespace),
4945        diagnostic::on_unimplemented(
4946            message = "`fetch_ptr_add` requires atomic CAS but not available on this target by default",
4947            label = "this associated function is not available on this target by default",
4948            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4949            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4950        )
4951    )]
4952    pub trait HasFetchPtrAdd {}
4953    #[doc(hidden)]
4954    #[cfg_attr(
4955        not(portable_atomic_no_diagnostic_namespace),
4956        diagnostic::on_unimplemented(
4957            message = "`fetch_ptr_sub` requires atomic CAS but not available on this target by default",
4958            label = "this associated function is not available on this target by default",
4959            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4960            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4961        )
4962    )]
4963    pub trait HasFetchPtrSub {}
4964    #[doc(hidden)]
4965    #[cfg_attr(
4966        not(portable_atomic_no_diagnostic_namespace),
4967        diagnostic::on_unimplemented(
4968            message = "`fetch_byte_add` requires atomic CAS but not available on this target by default",
4969            label = "this associated function is not available on this target by default",
4970            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4971            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4972        )
4973    )]
4974    pub trait HasFetchByteAdd {}
4975    #[doc(hidden)]
4976    #[cfg_attr(
4977        not(portable_atomic_no_diagnostic_namespace),
4978        diagnostic::on_unimplemented(
4979            message = "`fetch_byte_sub` requires atomic CAS but not available on this target by default",
4980            label = "this associated function is not available on this target by default",
4981            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4982            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4983        )
4984    )]
4985    pub trait HasFetchByteSub {}
4986    #[doc(hidden)]
4987    #[cfg_attr(
4988        not(portable_atomic_no_diagnostic_namespace),
4989        diagnostic::on_unimplemented(
4990            message = "`fetch_and` requires atomic CAS but not available on this target by default",
4991            label = "this associated function is not available on this target by default",
4992            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4993            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4994        )
4995    )]
4996    pub trait HasFetchAnd {}
4997    #[doc(hidden)]
4998    #[cfg_attr(
4999        not(portable_atomic_no_diagnostic_namespace),
5000        diagnostic::on_unimplemented(
5001            message = "`and` requires atomic CAS but not available on this target by default",
5002            label = "this associated function is not available on this target by default",
5003            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5004            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5005        )
5006    )]
5007    pub trait HasAnd {}
5008    } // cfg_no_atomic_cas_or_amo32!
5009    #[doc(hidden)]
5010    #[cfg_attr(
5011        not(portable_atomic_no_diagnostic_namespace),
5012        diagnostic::on_unimplemented(
5013            message = "`fetch_nand` requires atomic CAS but not available on this target by default",
5014            label = "this associated function is not available on this target by default",
5015            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5016            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5017        )
5018    )]
5019    pub trait HasFetchNand {}
5020    cfg_no_atomic_cas_or_amo32! {
5021    #[doc(hidden)]
5022    #[cfg_attr(
5023        not(portable_atomic_no_diagnostic_namespace),
5024        diagnostic::on_unimplemented(
5025            message = "`fetch_or` requires atomic CAS but not available on this target by default",
5026            label = "this associated function is not available on this target by default",
5027            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5028            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5029        )
5030    )]
5031    pub trait HasFetchOr {}
5032    #[doc(hidden)]
5033    #[cfg_attr(
5034        not(portable_atomic_no_diagnostic_namespace),
5035        diagnostic::on_unimplemented(
5036            message = "`or` requires atomic CAS but not available on this target by default",
5037            label = "this associated function is not available on this target by default",
5038            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5039            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5040        )
5041    )]
5042    pub trait HasOr {}
5043    #[doc(hidden)]
5044    #[cfg_attr(
5045        not(portable_atomic_no_diagnostic_namespace),
5046        diagnostic::on_unimplemented(
5047            message = "`fetch_xor` requires atomic CAS but not available on this target by default",
5048            label = "this associated function is not available on this target by default",
5049            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5050            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5051        )
5052    )]
5053    pub trait HasFetchXor {}
5054    #[doc(hidden)]
5055    #[cfg_attr(
5056        not(portable_atomic_no_diagnostic_namespace),
5057        diagnostic::on_unimplemented(
5058            message = "`xor` requires atomic CAS but not available on this target by default",
5059            label = "this associated function is not available on this target by default",
5060            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5061            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5062        )
5063    )]
5064    pub trait HasXor {}
5065    #[doc(hidden)]
5066    #[cfg_attr(
5067        not(portable_atomic_no_diagnostic_namespace),
5068        diagnostic::on_unimplemented(
5069            message = "`fetch_not` requires atomic CAS but not available on this target by default",
5070            label = "this associated function is not available on this target by default",
5071            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5072            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5073        )
5074    )]
5075    pub trait HasFetchNot {}
5076    #[doc(hidden)]
5077    #[cfg_attr(
5078        not(portable_atomic_no_diagnostic_namespace),
5079        diagnostic::on_unimplemented(
5080            message = "`not` requires atomic CAS but not available on this target by default",
5081            label = "this associated function is not available on this target by default",
5082            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5083            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5084        )
5085    )]
5086    pub trait HasNot {}
5087    } // cfg_no_atomic_cas_or_amo32!
5088    #[doc(hidden)]
5089    #[cfg_attr(
5090        not(portable_atomic_no_diagnostic_namespace),
5091        diagnostic::on_unimplemented(
5092            message = "`fetch_neg` requires atomic CAS but not available on this target by default",
5093            label = "this associated function is not available on this target by default",
5094            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5095            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5096        )
5097    )]
5098    pub trait HasFetchNeg {}
5099    #[doc(hidden)]
5100    #[cfg_attr(
5101        not(portable_atomic_no_diagnostic_namespace),
5102        diagnostic::on_unimplemented(
5103            message = "`neg` requires atomic CAS but not available on this target by default",
5104            label = "this associated function is not available on this target by default",
5105            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5106            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5107        )
5108    )]
5109    pub trait HasNeg {}
5110    cfg_no_atomic_cas_or_amo32! {
5111    #[cfg(feature = "float")]
5112    #[cfg_attr(target_pointer_width = "16", allow(dead_code, unreachable_pub))]
5113    #[doc(hidden)]
5114    #[cfg_attr(
5115        not(portable_atomic_no_diagnostic_namespace),
5116        diagnostic::on_unimplemented(
5117            message = "`fetch_abs` requires atomic CAS but not available on this target by default",
5118            label = "this associated function is not available on this target by default",
5119            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5120            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5121        )
5122    )]
5123    pub trait HasFetchAbs {}
5124    } // cfg_no_atomic_cas_or_amo32!
5125    #[doc(hidden)]
5126    #[cfg_attr(
5127        not(portable_atomic_no_diagnostic_namespace),
5128        diagnostic::on_unimplemented(
5129            message = "`fetch_min` requires atomic CAS but not available on this target by default",
5130            label = "this associated function is not available on this target by default",
5131            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5132            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5133        )
5134    )]
5135    pub trait HasFetchMin {}
5136    #[doc(hidden)]
5137    #[cfg_attr(
5138        not(portable_atomic_no_diagnostic_namespace),
5139        diagnostic::on_unimplemented(
5140            message = "`fetch_max` requires atomic CAS but not available on this target by default",
5141            label = "this associated function is not available on this target by default",
5142            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5143            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5144        )
5145    )]
5146    pub trait HasFetchMax {}
5147    #[doc(hidden)]
5148    #[cfg_attr(
5149        not(portable_atomic_no_diagnostic_namespace),
5150        diagnostic::on_unimplemented(
5151            message = "`fetch_update` requires atomic CAS but not available on this target by default",
5152            label = "this associated function is not available on this target by default",
5153            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5154            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5155        )
5156    )]
5157    pub trait HasFetchUpdate {}
5158    cfg_no_atomic_cas_or_amo32! {
5159    #[doc(hidden)]
5160    #[cfg_attr(
5161        not(portable_atomic_no_diagnostic_namespace),
5162        diagnostic::on_unimplemented(
5163            message = "`bit_set` requires atomic CAS but not available on this target by default",
5164            label = "this associated function is not available on this target by default",
5165            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5166            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5167        )
5168    )]
5169    pub trait HasBitSet {}
5170    #[doc(hidden)]
5171    #[cfg_attr(
5172        not(portable_atomic_no_diagnostic_namespace),
5173        diagnostic::on_unimplemented(
5174            message = "`bit_clear` requires atomic CAS but not available on this target by default",
5175            label = "this associated function is not available on this target by default",
5176            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5177            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5178        )
5179    )]
5180    pub trait HasBitClear {}
5181    #[doc(hidden)]
5182    #[cfg_attr(
5183        not(portable_atomic_no_diagnostic_namespace),
5184        diagnostic::on_unimplemented(
5185            message = "`bit_toggle` requires atomic CAS but not available on this target by default",
5186            label = "this associated function is not available on this target by default",
5187            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5188            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5189        )
5190    )]
5191    pub trait HasBitToggle {}
5192    } // cfg_no_atomic_cas_or_amo32!
5193}
5194} // cfg_no_atomic_cas!