Specifies the set of environment variables available to actions with target 
    configuration. Variables can be either specified by <code>name</code>, in 
    which case
    the value will be taken from the invocation environment, or by the 
    <code>name=value</code> pair which sets the value independent of the 
    invocation environment. This option can be used multiple times; for options 
    given for the same variable, the latest wins; options for different 
    variables accumulate.
    <br>
    Note that unless <code>--incompatible_repo_env_ignores_action_env</code> is 
    true, all <code>name=value</code> pairs will be available to repository 
    rules.
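
    Assuming this is Bazel's `--action_env` option (the flag name is not stated 
    above), the accumulate/override behavior might look like this in a 
    `.bazelrc`; the variable names are illustrative:

    ```
    # Take CC's value from the invocation environment:
    build --action_env=CC
    # Pin a value regardless of the invocation environment:
    build --action_env=MY_VAR=one
    # The latest option for the same variable wins; MY_VAR is now "two":
    build --action_env=MY_VAR=two
    ```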
    
    If the analysis cache would be discarded due to a change in the build 
    system, setting this option to false will cause Bazel to exit rather than 
    continuing with the build. This option has no effect when 
    'discard_analysis_cache' is also set.
    If true, an analysis failure of a rule target results in the target's 
    propagation of an instance of AnalysisFailureInfo containing the error 
    description, instead of resulting in a build failure.
    Specifies the module versions in the form of `<module1>@<version1>,
    <module2>@<version2>` that will be allowed in the resolved dependency graph 
    even if they are declared yanked in the registry where they come from (if 
    they are not coming from a NonRegistryOverride). Otherwise, yanked versions 
    will cause the resolution to fail. You can also define allowed yanked 
    versions with the `BZLMOD_ALLOW_YANKED_VERSIONS` environment variable. You 
    can disable this check by using the keyword 'all' (not recommended).
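
    For example, assuming this is the `--allow_yanked_versions` flag (module 
    names below are hypothetical):

    ```
    # Allow two specific yanked versions in the resolved dependency graph:
    build --allow_yanked_versions=rules_foo@1.2.0,rules_bar@0.9.1
    # Or disable the check entirely (not recommended):
    build --allow_yanked_versions=all
    ```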
    Sets the maximum number of transitive dependencies through a rule attribute 
    with a for_analysis_testing configuration transition. Exceeding this limit 
    will result in a rule error.
    Generate AndroidX-compatible data-binding files. This is only used with 
    databinding v2. This flag is a no-op.
    Use android databinding v2 with 3.4.0 argument. This flag is a no-op.
    Determines whether C++ deps of Android rules will be linked dynamically 
    when a cc_binary does not explicitly create a shared library. 'default' 
    means Bazel will choose whether to link dynamically.  'fully' means all 
    libraries will be linked dynamically. 'off' means that all libraries will 
    be linked in mostly static mode.
    Selects the manifest merger to use for android_binary rules. Flag to help 
    the transition to the Android manifest merger from the legacy merger.
    Sets the order of manifests passed to the manifest merger for Android 
    binaries. ALPHABETICAL means manifests are sorted by path relative to the 
    execroot. ALPHABETICAL_BY_CONFIGURATION means manifests are sorted by paths 
    relative to the configuration directory within the output directory. 
    DEPENDENCY means manifests are ordered with each library's manifest coming 
    before the manifests of its dependencies.
    Sets the platforms that android_binary targets use. If multiple platforms 
    are specified, then the binary is a fat APK, which contains native 
    binaries for each specified target platform.
    Enables resource shrinking for android_binary APKs that use ProGuard.
    The label of the crosstool package to be used in Apple and Objc rules and 
    their dependencies.
Comma-separated list of aspects to be applied to top-level targets. In the list, if aspect some_aspect specifies required aspect providers via required_aspect_providers, some_aspect will run after every aspect that was mentioned before it in the aspects list whose advertised providers satisfy some_aspect's required aspect providers. Moreover, some_aspect will run after all its required aspects specified by the requires attribute. some_aspect will then have access to the values of those aspects' providers. Aspects are specified in the form <bzl-file-label>%<aspect_name>, for example '//tools:my_def.bzl%my_aspect', where 'my_aspect' is a top-level value from a file tools/my_def.bzl.
    Specifies the values of the command-line aspects parameters. Each parameter 
    value is specified via <param_name>=<param_value>, for example 
    'my_param=my_val' where 'my_param' is a parameter of some aspect in --
    aspects list or required by an aspect in the list. This option can be used 
    multiple times. However, it is not allowed to assign values to the same 
    parameter more than once.
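
    Combining the two options above, and reusing the example names from this 
    text, an invocation might look like (the target label is illustrative):

    ```
    bazel build //some:target \
      --aspects=//tools:my_def.bzl%my_aspect \
      --aspects_parameters=my_param=my_val
    ```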
    When printing the location part of messages, attempt to use a path relative 
    to the workspace directory or one of the directories specified by --
    package_path.
If --output_filter is not specified, then the value for this option is used to create a filter automatically. Allowed values are 'none' (filter nothing / show everything), 'all' (filter everything / show nothing), 'packages' (include output from rules in packages mentioned on the Blaze command line), and 'subpackages' (like 'packages', but also include subpackages). For the 'packages' and 'subpackages' values, //java/foo and //javatests/foo are treated as one package.
    Maximum number of open files allowed during BEP artifact upload.
    Specifies the build event service (BES) backend endpoint in the form 
    [SCHEME://]HOST[:PORT]. The default is to disable BES uploads. Supported 
    schemes are grpc and grpcs (grpc with TLS enabled). If no scheme is 
    provided, Bazel assumes grpcs.
    Sets the field check_preceding_lifecycle_events_present on 
    PublishBuildToolEventStreamRequest which tells BES to check whether it 
    previously received InvocationAttemptStarted and BuildEnqueued events 
    matching the current tool event.
    Specify a header in NAME=VALUE form that will be included in BES requests. 
    Multiple headers can be passed by specifying the flag multiple times. 
    Multiple values for the same name will be converted to a comma-separated 
    list.
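
    Assuming this is the `--bes_header` flag, repeated headers (names and 
    values below are hypothetical) might be supplied as:

    ```
    build --bes_header=x-build-user=alice
    build --bes_header=x-build-tag=ci
    # Two values for the same name become a comma-separated list:
    build --bes_header=x-build-tag=nightly
    ```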
    Specifies the instance name under which the BES will persist uploaded BEP. 
    Defaults to null.
    Specifies a list of notification keywords to be added to the default set of 
    keywords published to BES ("command_name=<command_name> ", 
    "protocol_name=BEP"). Defaults to none.
    Specifies whether to publish BES lifecycle events. (defaults to 'true').
    Specifies how long bazel should wait for the BES/BEP upload to complete 
    while OOMing. This flag ensures termination when the JVM is severely GC 
    thrashing and cannot make progress on any user thread.
    Specifies the maximal size of stdout or stderr to be buffered in BEP 
    before it is reported as a progress event. Individual writes are still 
    reported in a single event, even if larger than the specified value, up to 
    --bes_outerr_chunk_size.
    Specifies the maximal size of stdout or stderr to be sent to BEP in a 
    single message.
Connect to the Build Event Service through a proxy. Currently this flag can only be used to configure a Unix domain socket (unix:/path/to/socket).
    Specifies the base URL where a user can view the information streamed to 
    the BES backend. Bazel will output the URL appended by the invocation id to 
    the terminal.
    Specifies a list of notification keywords to be included directly, without 
    the "user_keyword=" prefix included for keywords supplied via --
    bes_keywords. Intended for Build service operators that set --
    bes_lifecycle_events=false and include keywords when calling 
    PublishLifecycleEvent. Build service operators using this flag should 
    prevent users from overriding the flag value.
    Specifies how long bazel should wait for the BES/BEP upload to complete 
    after the build and tests have finished. A valid timeout is a natural 
    number followed by a unit: days (d), hours (h), minutes (m), seconds (s), 
    and milliseconds (ms). The default value is '0' which means that there is 
    no timeout.
    Specifies whether the Build Event Service upload should block the build 
    completion or should end the invocation immediately and finish the upload 
    in the background. Either 'wait_for_upload_complete' (default), 
    'nowait_for_upload_complete', or 'fully_async'.
    If true, dex2oat action failures will cause the build to break instead of 
    executing dex2oat during test runtime.
    Execute the build; this is the usual behaviour. Specifying --nobuild causes 
    the build to stop before executing the build actions, returning zero iff 
    the package loading and analysis phases completed successfully; this mode 
    is useful for testing those phases.
    If non-empty, write a varint delimited binary representation of the build 
    event protocol to that file. This option 
    implies --bes_upload_mode=wait_for_upload_complete.
    Convert paths in the binary file representation of the build event protocol 
    to more globally valid URIs whenever possible; if disabled, the file:// uri 
    scheme will always be used.
--build_event_binary_file_upload_mode=<wait_for_upload_complete, nowait_for_upload_complete or fully_async>
    Specifies whether the Build Event Service upload for --
    build_event_binary_file should block the build completion or should end the 
    invocation immediately and finish the upload in the background. Either 
    'wait_for_upload_complete' (default), 'nowait_for_upload_complete', or 
    'fully_async'.
    If non-empty, write a JSON serialisation of the build event protocol to 
    that file. This option implies --bes_upload_mode=wait_for_upload_complete.
    Convert paths in the json file representation of the build event protocol 
    to more globally valid URIs whenever possible; if disabled, the file:// uri 
    scheme will always be used.
--build_event_json_file_upload_mode=<wait_for_upload_complete, nowait_for_upload_complete or fully_async>
    Specifies whether the Build Event Service upload for --
    build_event_json_file should block the build completion or should end the 
    invocation immediately and finish the upload in the background. Either 
    'wait_for_upload_complete' (default), 'nowait_for_upload_complete', or 
    'fully_async'.
    The maximum number of entries for a single named_set_of_files event; values 
    smaller than 2 are ignored and no event splitting is performed. This is 
    intended for limiting the maximum event size in the build event protocol, 
    although it does not directly control event size. The total event size is a 
    function of the structure of the set as well as the file and uri lengths, 
    which may in turn depend on the hash function.
    If non-empty, write a textual representation of the build event protocol to 
    that file.
    Convert paths in the text file representation of the build event protocol 
    to more globally valid URIs whenever possible; if disabled, the file:// uri 
    scheme will always be used.
--build_event_text_file_upload_mode=<wait_for_upload_complete, nowait_for_upload_complete or fully_async>
    Specifies whether the Build Event Service upload for --
    build_event_text_file should block the build completion or should end the 
    invocation immediately and finish the upload in the background. Either 
    'wait_for_upload_complete' (default), 'nowait_for_upload_complete', or 
    'fully_async'.
    The maximum number of times Bazel should retry uploading a build event.
Forces test targets tagged 'manual' to be built. 'manual' tests are excluded from processing. This option forces them to be built (but not executed).
    Custom key-value string pairs to supply in a build event.
    Build python executable zip; on by default on Windows, off on other 
    platforms.
    If true, build runfiles symlink forests for all targets.  If false, write 
    them only when required by a local action, test or run command.
    If true, write runfiles manifests for all targets. If false, omit them. 
    Local tests will fail to run when false.
Specifies a comma-separated list of tags. Each tag can be optionally preceded with '-' to specify excluded tags. Only those targets will be built that contain at least one included tag and do not contain any excluded tags. This option does not affect the set of tests executed with the 'test' command; those are governed by the test filtering options, for example '--test_tag_filters'.
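
Assuming this is the `--build_tag_filters` flag, a sketch with illustrative tag names:

```
# Build only targets tagged 'linux' or 'ci', excluding anything tagged 'flaky':
bazel build //... --build_tag_filters=linux,ci,-flaky
```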
    If enabled, when building C++ tests statically and with fission, the .dwp 
    file for the test binary will be automatically built as well.
If specified, only *_test and test_suite rules will be built and other targets specified on the command line will be ignored. By default everything that was requested will be built.
If greater than 0, configures Bazel to cache file digests in memory based on their metadata instead of recomputing the digests from disk every time they are needed. Setting this to 0 ensures correctness because not all file changes can be noted from file metadata. When not 0, the number indicates the size of the cache as the number of file digests to be cached.
If set to 'auto', Bazel reruns a test if and only if: (1) Bazel detects changes in the test or its dependencies, (2) the test is marked as external, (3) multiple test runs were requested with --runs_per_test, or (4) the test previously failed. If set to 'yes', Bazel caches all test results except for tests marked as external. If set to 'no', Bazel does not cache any test results.
    Comma-separated list of architectures for which to build Apple Catalyst 
    binaries.
    Sets the suffixes of header files that a cc_proto_library creates.
    Sets the suffixes of source files that a cc_proto_library creates.
    Check bazel version compatibility of Bazel modules. Valid values are 
    `error` to escalate it to a resolution failure, `off` to disable the check, 
    or `warning` to print a warning when a mismatch is detected.
    If disabled, .bzl load visibility errors are demoted to warnings.
    Check if the direct `bazel_dep` dependencies declared in the root module 
    are the same versions you get in the resolved dependency graph. Valid 
    values are `off` to disable the check, `warning` to print a warning when a 
    mismatch is detected, or `error` to escalate it to a resolution failure.
    Check that licensing constraints imposed by dependent packages do not 
    conflict with distribution modes of the targets being built. By default, 
    licenses are not checked.
    Don't run tests, just check if they are up-to-date.  If all test results 
    are up-to-date, the testing completes successfully.  If any test needs to 
    be built or executed, an error is reported and the testing fails.  This 
    option implies --check_up_to_date behavior.
      Using this option will also add: --check_up_to_date 
    Don't perform the build, just check if it is up-to-date.  If all targets 
    are up-to-date, the build completes successfully.  If any step needs to be 
    executed an error is reported and the build fails.
    If disabled, visibility errors in target dependencies are demoted to 
    warnings.
    If specified, Bazel will instrument code (using offline instrumentation 
    where possible) and will collect coverage information during tests. Only 
    targets that match --instrumentation_filter will be affected. Usually this 
    option should not be specified directly - the 'bazel coverage' command 
    should be used instead.
Specifies desired cumulative coverage report type. At this point only LCOV is supported.
    Specify the mode the binary will be built in. Values: 'fastbuild', 'dbg', 
    'opt'.
Compile a single dependency of the argument files. This is useful for syntax checking source files in IDEs, for example, by rebuilding a single target that depends on the source file to detect errors as early as possible in the edit/build/test cycle. This argument affects the way all non-flag arguments are interpreted; instead of being targets to build they are source filenames.  For each source filename an arbitrary target that depends on it will be built.
Selects additional config sections from the rc files; for every <command>, it also pulls in the options from <command>:<config> if such a section exists; if this section doesn't exist in any .rc file, Blaze fails with an error. The config sections and flag combinations they are equivalent to are located in the tools/*.blazerc config files.
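
Assuming this is the `--config` option, the rc-file expansion described above can be sketched as follows; the section name and flags are hypothetical:

```
# .bazelrc
build:memcheck --strip=never
build:memcheck --copt=-O0

# Selecting the config pulls in both lines above:
#   bazel build --config=memcheck //...
```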
    Location of the binary that is used to postprocess raw coverage reports. 
    This must currently be a filegroup that contains a single file, the binary. 
    Defaults to '//tools/test:lcov_merger'.
    Location of the binary that is used to generate coverage reports. This must 
    currently be a filegroup that contains a single file, the binary. Defaults 
    to '//tools/test:coverage_report_generator'.
    Location of support files that are required on the inputs of every test 
    action that collects code coverage. Defaults to '//tools/test:
    coverage_support'.
--credential_helper=<Path to a credential helper. It may be absolute, relative to the PATH environment variable, or %workspace%-relative. The path may optionally be prefixed by a scope followed by an '='. The scope is a domain name, optionally with a single leading '*' wildcard component. A helper applies to URIs matching its scope, with more specific scopes preferred. If a helper has no scope, it applies to every URI.>
Configures a credential helper conforming to the <a href="https://github.com/EngFlow/credential-helper-spec">Credential Helper Specification</a> to use for retrieving authorization credentials for repository fetching, remote caching and execution, and the build event service. Credentials supplied by a helper take precedence over credentials supplied by `--google_default_credentials`, `--google_credentials`, a `.netrc` file, or the auth parameter to `repository_ctx.download()` and `repository_ctx.download_and_extract()`. May be specified multiple times to set up multiple helpers. See https://blog.engflow.com/2023/10/09/configuring-bazels-credential-helper/ for instructions.
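
A sketch of scoped and unscoped helpers in a `.bazelrc`; the paths and domain are hypothetical:

```
# Scoped helper: applies only to URIs under example.com:
build --credential_helper=*.example.com=/usr/local/bin/my-helper
# Unscoped helper: applies to every other URI:
build --credential_helper=%workspace%/tools/fallback-helper
```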
The default duration for which credentials supplied by a credential helper are cached if the helper does not indicate when the credentials expire.
Configures the timeout for a credential helper. Credential helpers failing to respond within this timeout will fail the invocation.
    Use CSFDO profile information to optimize compilation. Specify the absolute 
    path name of the zip file containing the profile file, a raw or an indexed 
    LLVM profile file.
    Generate binaries with context sensitive FDO instrumentation. With 
    Clang/LLVM compiler, it also accepts the directory name under which the raw 
    profile file(s) will be dumped at runtime.
      Using this option will also add: --copt=-Wno-error 
    The cs_fdo_profile representing the context sensitive profile to be used 
    for optimization.
    Specifies a custom malloc implementation. This setting overrides malloc 
    attributes in build rules.
--default_test_resources=<resource name followed by equal and 1 float or 4 float, e.g memory=10,30,60,100>
Override the default resources amount for tests. The expected format is <resource>=<value>. If a single positive number is specified as <value> it will override the default resources for all test sizes. If 4 comma-separated numbers are specified, they will override the resource amount for the small, medium, large, and enormous test sizes, respectively. Values can also be HOST_RAM/HOST_CPU, optionally followed by [-|*]<float> (e.g. memory=HOST_RAM*.1,HOST_RAM*.2,HOST_RAM*.3,HOST_RAM*.4). The default test resources specified by this flag are overridden by explicit resources specified in tags.
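
Following the format above (the single-value amount is illustrative; the four-value form reuses the flag's own example):

```
# One value overrides the default for all test sizes:
bazel test //... --default_test_resources=cpu=2
# Four values: small, medium, large, enormous, respectively:
bazel test //... --default_test_resources=memory=HOST_RAM*.1,HOST_RAM*.2,HOST_RAM*.3,HOST_RAM*.4
```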
    Each --define option specifies an assignment for a build variable. In case 
    of multiple values for a variable, the last one wins.
A comma-separated list of names of packages which the build system will consider non-existent, even if they are visible somewhere on the package path. Use this option when deleting a subpackage 'x/y' of an existing package 'x'.  For example, after deleting x/y/BUILD in your client, the build system may complain if it encounters a label '//x:y/z' if that is still provided by another package_path entry.  Specifying --deleted_packages x/y avoids this problem.
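
Following the example in the text:

```
# After deleting x/y/BUILD in your client, hide the stale package that another
# package_path entry may still provide:
bazel build //x:all --deleted_packages=x/y
```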
    Whether to include supported Java 8 libraries in apps for legacy devices.
    If set, and compilation mode is not 'opt', objc apps will include debug 
    entitlements when signing.
Discard the analysis cache immediately after the analysis phase completes. Reduces memory usage by ~10%, but makes further incremental builds slower.
A path to a directory where Bazel can read and write actions and action outputs. If the directory does not exist, it will be created.
    Additional places to search for archives before accessing the network to 
    download them.
Specify a file to configure the remote downloader with. This file consists of lines, each of which starts with a directive (`allow`, `block` or `rewrite`) followed by either a host name (for `allow` and `block`) or two patterns, one to match against, and one to use as a substitute URL, with back-references starting from `$1`. It is possible for multiple `rewrite` directives for the same URL to be given, and in this case multiple URLs will be returned.
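
A hypothetical configuration file following the directive grammar described above might read (host names and patterns are illustrative):

```
allow mirror.example.com
block bad-mirror.example.com
rewrite example\.com/foo/(.*) mirror.example.com/foo/$1
rewrite example\.com/foo/(.*) second-mirror.example.com/foo/$1
```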
    How many milliseconds should local execution be delayed, if remote 
    execution was faster during a build at least once?
    The local strategies, in order, to use for the given mnemonic - the first 
    applicable strategy is used. For example, `worker,sandboxed` runs actions 
    that support persistent workers using the worker strategy, and all others 
    using the sandboxed strategy. If no mnemonic is given, the list of 
    strategies is used as the fallback for all mnemonics. The default fallback 
    list is `worker,sandboxed`, or `worker,sandboxed,standalone` if 
    `experimental_local_lockfree_output` is set. Takes [mnemonic=]local_strategy
    [,local_strategy,...]
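
    Assuming this describes the `--dynamic_local_strategy` flag, a sketch 
    (the Javac mnemonic is illustrative):

    ```
    # Javac actions try workers first, then sandboxing:
    build --dynamic_local_strategy=Javac=worker,sandboxed
    # Fallback list for all other mnemonics:
    build --dynamic_local_strategy=sandboxed,standalone
    ```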
    Determines whether C++ binaries will be linked dynamically.  'default' 
    means Bazel will choose whether to link dynamically.  'fully' means all 
    libraries will be linked dynamically. 'off' means that all libraries will 
    be linked in mostly static mode.
    The remote strategies, in order, to use for the given mnemonic - the first 
    applicable strategy is used. If no mnemonic is given, the list of 
    strategies is used as the fallback for all mnemonics. The default fallback 
    list is `remote`, so this flag usually does not need to be set explicitly. 
    Takes [mnemonic=]remote_strategy[,remote_strategy,...]
    If true, enables the Bzlmod dependency management system, taking precedence 
    over WORKSPACE. See https://bazel.build/docs/bzlmod for more information.
If true, Bazel picks up host-OS-specific config lines from bazelrc files. For example, if the host OS is Linux and you run bazel build, Bazel picks up lines starting with build:linux. Supported OS identifiers are linux, macos, windows, freebsd, and openbsd. Enabling this flag is equivalent to using --config=linux on Linux, --config=windows on Windows, etc.
    If set, any use of absolute paths for propeller optimize will raise an 
    error.
    If set, any use of absolute paths for FDO will raise an error.
    Enable runfiles symlink tree; by default, this is off on Windows and 
    enabled on other platforms.
    If true, enables the legacy WORKSPACE system for external dependencies. See 
    https://bazel.build/external/overview for more information.
    Checks the environments each target is compatible with and reports errors 
    if any target has dependencies that don't support the same environments.
Log the executed spawns into this file as length-delimited SpawnExec protos, according to src/main/protobuf/spawn.proto. Prefer --execution_log_compact_file, which is significantly smaller and cheaper to produce. Related flags: --execution_log_compact_file (compact format; mutually exclusive), --execution_log_json_file (text JSON format; mutually exclusive), --execution_log_sort (whether to sort the execution log), --subcommands (for displaying subcommands in terminal output).
Log the executed spawns into this file as length-delimited ExecLogEntry protos, according to src/main/protobuf/spawn.proto. The entire file is zstd compressed. Related flags: --execution_log_binary_file (binary protobuf format; mutually exclusive), --execution_log_json_file (text JSON format; mutually exclusive), --subcommands (for displaying subcommands in terminal output).
Log the executed spawns into this file as newline-delimited JSON representations of SpawnExec protos, according to src/main/protobuf/spawn.proto. Prefer --execution_log_compact_file, which is significantly smaller and cheaper to produce. Related flags: --execution_log_compact_file (compact format; mutually exclusive), --execution_log_binary_file (binary protobuf format; mutually exclusive), --execution_log_sort (whether to sort the execution log), --subcommands (for displaying subcommands in terminal output).
Whether to sort the execution log, making it easier to compare logs across invocations. Set to false to avoid potentially significant CPU and memory usage at the end of the invocation, at the cost of producing the log in nondeterministic execution order. Only applies to the binary and JSON formats; the compact format is never sorted.
    Expand test_suite targets into their constituent tests before analysis. 
    When this flag is turned on (the default), negative target patterns will 
    apply to the tests belonging to the test suite, otherwise they will not. 
    Turning off this flag is useful when top-level aspects are applied at 
    command line: then they can analyze test_suite targets.
    Deprecated in favor of aspects. Use action_listener to attach an 
    extra_action to existing build actions.
    List of comma-separated regular expressions, each optionally prefixed by - 
    (negative expression), assigned (=) to a list of comma-separated constraint 
    value targets. If a target matches no negative expression and at least one 
    positive expression its toolchain resolution will be performed as if it had 
    declared the constraint values as execution constraints. Example: //demo,-
    test=@platforms//cpus:x86_64 will add 'x86_64' to any target under //demo 
    except for those whose name contains 'test'.
    Use android databinding v2. This flag is a no-op.
    Enables resource shrinking for android_binary APKs that use ProGuard.
    Use dex2oat in parallel to possibly speed up android_test.
    If true, expand Filesets in the BEP when presenting output files.
    If true, fully resolve relative Fileset symlinks in the BEP when presenting 
    output files. Requires --experimental_build_event_expand_filesets.
--experimental_build_event_output_group_mode=<output group name followed by an OutputGroupFileMode, e.g. default=both>
    Specify how an output group's files will be represented in 
    TargetComplete/AspectComplete BEP events. Values are an assignment of an 
    output group name to one of 'NAMED_SET_OF_FILES_ONLY', 'INLINE_ONLY', or 
    'BOTH'. The default value is 'NAMED_SET_OF_FILES_ONLY'. If an output group 
    is repeated, the final value to appear is used. The default value sets the 
    mode for coverage artifacts to BOTH: --
    experimental_build_event_output_group_mode=baseline.lcov=both
    Initial, minimum delay for exponential backoff retries when BEP upload 
    fails. (exponent: 1.6)
    Selects how to upload artifacts referenced in the build event protocol.
    If enabled, adds a `visibility()` function that .bzl files may call during 
    top-level evaluation to set their visibility for the purpose of load() 
    statements.
    If 'on_failed' or 'on_passed', then Blaze will cancel concurrently running 
    tests on the first run with that result. This is only useful in combination 
    with --runs_per_test_detects_flakes.
    If set to true, rule attributes and Starlark API methods needed for the 
    rule cc_shared_library will be available.
    Whether to double-check correct desugaring at Android binary level.
    Specifies the strategy for the circuit breaker to use. Available strategies 
    are "failure". If an invalid value is given, the behavior is the same as if 
    the option were not set.
    If specified, Bazel will also collect coverage information for generated 
    files.
    If enabled, the profiler collects the system's overall load average.
    If enabled, the profiler collects the Linux PSI data.
    If enabled, the profiler collects CPU and memory usage estimation for local 
    actions.
    If enabled, the profiler collects SkyFunction counts in the Skyframe graph 
    over time for key function types, like configured targets and action 
    executions. May have a performance hit as this visits the ENTIRE Skyframe 
    graph at every profiling time unit. Do not use this flag with performance-
    critical measurements.
    If enabled, the profiler collects the system's network usage.
    If enabled, the profiler collects worker's aggregated resource data.
Records a Java Flight Recorder profile for the duration of the command. One of the supported profiling event types (cpu, wall, alloc or lock) must be given as an argument. The profile is written to a file named after the event type under the output base directory. The syntax and semantics of this flag might change in the future to support additional profile types or output formats; use at your own risk.
    This flag controls how the convenience symlinks (the symlinks that appear 
    in the workspace after the build) will be managed. Possible values:
      normal (default): Each kind of convenience symlink will be created or 
    deleted, as determined by the build.
      clean: All symlinks will be unconditionally deleted.
      ignore: Symlinks will not be created or cleaned up.
      log_only: Generate log messages as if 'normal' were passed, but don't 
    actually perform any filesystem operations (useful for tools).
    Note that only symlinks whose names are generated by the current value of --
    symlink_prefix can be affected; if the prefix changes, any pre-existing 
    symlinks will be left alone.
    This flag controls whether or not we will post the build event 
    ConvenienceSymlinksIdentified to the BuildEventProtocol. If the value 
    is true, the BuildEventProtocol will have an entry for 
    convenienceSymlinksIdentified, listing all of the convenience symlinks 
    created in your workspace. If false, then the convenienceSymlinksIdentified 
    entry in the BuildEventProtocol will be empty.
    Enables the experimental local execution scheduling based on CPU load, 
    rather than estimating actions one by one. Experimental scheduling has 
    shown large benefits for large local builds on powerful machines with many 
    cores. Recommended to use with --local_resources=cpu=HOST_CPUS
    If set to true, the auto-generated //external package will not be available 
    anymore. Bazel will still be unable to parse the file 'external/BUILD', but 
    globs reaching into external/ from the unnamed package will work.
How long the server must remain idle before a garbage collection of the disk cache occurs. To specify the garbage collection policy, set --experimental_disk_cache_gc_max_size and/or --experimental_disk_cache_gc_max_age.
If set to a positive value, the disk cache will be periodically garbage collected to remove entries older than this age. If set in conjunction with --experimental_disk_cache_gc_max_size, both criteria are applied. Garbage collection occurs in the background once the server has become idle, as determined by the --experimental_disk_cache_gc_idle_delay flag.
--experimental_disk_cache_gc_max_size=<size in bytes, optionally followed by a K, M, G or T multiplier>
If set to a positive value, the disk cache will be periodically garbage collected to stay under this size. If set in conjunction with --experimental_disk_cache_gc_max_age, both criteria are applied. Garbage collection occurs in the background once the server has become idle, as determined by the --experimental_disk_cache_gc_idle_delay flag.
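
Combining the garbage-collection flags named above (the size and age limits are illustrative):

```
# Garbage-collect the disk cache in the background once the server is idle,
# keeping it under 50 GB and dropping entries older than 14 days:
build --experimental_disk_cache_gc_max_size=50G
build --experimental_disk_cache_gc_max_age=14d
```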
    Specify a Docker image name (e.g. "ubuntu:latest") that should be used to 
    execute a sandboxed action when using the docker strategy and the action 
    itself doesn't already have a container-image attribute in its 
    remote_execution_properties in the platform description. The value of this 
    flag is passed verbatim to 'docker run', so it supports the same syntax and 
    mechanisms as Docker itself.
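For instance, the flag might be combined with the docker strategy on the command line (image name and target pattern illustrative):

```shell
# Run sandboxed actions inside the given image when the action's platform
# does not already specify a container-image.
bazel build //... \
  --spawn_strategy=docker \
  --experimental_docker_image=ubuntu:latest
```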
    If enabled, Bazel will pass the --privileged flag to 'docker run' when 
    running actions. This might be required by your build, but it might also 
    result in reduced hermeticity.
    If enabled, injects the uid and gid of the current user into the Docker 
    image before using it. This is required if your build / tests depend on the 
    user having a name and home directory inside the container. This is on by 
    default, but you can disable it in case the automatic image customization 
    feature doesn't work in your case or you know that you don't need it.
    If enabled, Bazel will print more verbose messages about the Docker sandbox 
    strategy.
    If set to true, attr.label(materializer=), 
    attr(for_dependency_resolution=), attr.dormant_label(), 
    attr.dormant_label_list() and rule(for_dependency_resolution=) are allowed.
    When set, targets that are built "for tool" are not subject to dynamic 
    execution. Such targets are extremely unlikely to be built incrementally 
    and thus not worth spending local cycles on.
    Takes a list of OS signal numbers. If a local branch of dynamic execution 
    gets killed with any of these signals, the remote branch will be allowed to 
    finish instead. For persistent workers, this only affects signals that kill 
    the worker process.
    Controls how much load from dynamic execution to put on the local machine. 
    This flag adjusts how many actions in dynamic execution we will schedule 
    concurrently. It is based on the number of CPUs Blaze thinks is available, 
    which can be controlled with the --local_cpu_resources flag.
    If this flag is 0, all actions are scheduled locally immediately. If > 0, 
    the number of actions scheduled locally is limited by the number of CPUs 
    available. If < 1, the load factor is used to reduce the number of locally 
    scheduled actions when the number of actions waiting to schedule is high. 
    This lessens the load on the local machine in the clean build case, where 
    the local machine does not contribute much.
    If >0, the time a dynamically run action must run remote-only before we 
    prioritize its local execution to avoid remote timeouts. This may hide some 
    problems on the remote execution system. Do not turn this on without 
    monitoring of remote execution issues.
    If set to true, enables the APIs required to support the Android Starlark 
    migration.
    Enable Docker-based sandboxing. This option has no effect if Docker is not 
    installed.
    If set to true, enables the `macro()` construct for defining symbolic 
    macros.
    If set to true, .scl files may be used in load() statements.
    If true, enable the use of --experimental_working_set to reduce Bazel's 
    memory footprint for incremental builds. This feature is known as Skyfocus.
    If true, enable the set data type and set() constructor in Starlark.
--experimental_extra_action_filter=<comma-separated list of regex expressions with prefix '-' specifying excluded paths>
Deprecated in favor of aspects. Filters set of targets to schedule extra_actions for.
Deprecated in favor of aspects. Only schedules extra_actions for top level targets.
    If true, then Bazel fetches the entire coverage data directory for each 
    test during a coverage run.
    Filter the ProGuard ProgramJar to remove any classes also present in the 
    LibraryJar.
    If true, coverage for clang will generate an LCOV report.
    If set to true, exposes a number of experimental pieces of Starlark build 
    API pertaining to Google legacy code.
    If set, add a "requires-xcode:{version}" execution requirement to every 
    Xcode action. If the Xcode version has a hyphenated label, also add a 
    "requires-xcode-label:{version_label}" execution requirement.
    If enabled, C++ .d files will be passed through in memory directly from the 
    remote build nodes instead of being written to disk.
    If enabled, the dependency (.jdeps) files generated from Java compilations 
    will be passed through in memory directly from the remote build nodes 
    instead of being written to disk.
    If set to true, the contents of stashed sandboxes for 
    reuse_sandbox_directories will be tracked in memory. This reduces the 
    amount of I/O needed during reuse. Depending on the build, this flag may 
    improve wall time, but it may also use a significant amount of additional 
    memory.
    Whether to make direct filesystem calls to create symlink trees instead of 
    delegating to a helper process.
    How long an install base must go unused before it's eligible for garbage 
    collection. If nonzero, the server will attempt to garbage collect other 
    install bases when idle.
    If true, enables the <code>isolate</code> parameter in the 
    <a href="https://bazel.build/rules/lib/globals/module#use_extension"
    ><code>use_extension</code></a> function.
    Whether to generate the J2ObjC header map in parallel with J2ObjC 
    transpilation.
    Whether to generate with shorter header path (uses "_ios" instead of 
    "_j2objc").
Enables reduced classpaths for Java compilations.
    Use separate outputs for header and regular compilation.
    If enabled, experimental_java_library_export_do_not_use module is available.
    No-op, kept only for backwards compatibility
    If materializing param files, do so with direct writes to disk.
    Uses these strings as objc fastbuild compiler options.
    If true, use libunwind for stack unwinding, and compile with 
    -fomit-frame-pointer and -fasynchronous-unwind-tables.
    When enabled, enforce that a java_binary rule can't contain more than one 
    version of the same class file on the classpath. This enforcement can break 
    the build, or can just result in warnings.
    Which model to use for where in the output tree rules write their outputs, 
    particularly for multi-platform / multi-configuration builds. This is 
    highly experimental. See https://github.com/bazelbuild/bazel/issues/6526 
    for details. Starlark actions can opt into path mapping by adding the key 
    'supports-path-mapping' to the 'execution_requirements' dict.
    Each entry should be of the form label=value, where label refers to a 
    platform and value is the desired shortname to use in the output path. 
    Only used when --experimental_platform_in_output_dir is true. Has highest 
    naming priority.
    Enable persistent aar extractor by using workers.
    If true, a shortname for the target platform is used in the output 
    directory name instead of the CPU. The exact scheme is experimental and 
    subject to change: First, in the rare case the --platforms option does not 
    have exactly one value, a hash of the platforms option is used. Next, if 
    any shortname for the current platform was registered by 
    --experimental_override_name_platform_in_output_dir, then that shortname is 
    used. Then, if --experimental_use_platforms_in_output_dir_legacy_heuristic 
    is set, use a shortname based off the current platform Label. Finally, a 
    hash of the platform option is used as a last resort.
    If set to true, enables a number of platform-related Starlark APIs useful 
    for debugging.
    If true, use the most recent Xcode that is available both locally and 
    remotely. If false, or if there are no mutual available versions, use the 
    local Xcode version selected via xcode-select.
--experimental_profile_additional_tasks=<phase, action, discover_inputs, action_check, action_lock, action_update, action_complete, action_rewinding, bzlmod, info, create_package, remote_execution, local_execution, scanner, local_parse, upload_time, remote_process_time, remote_queue, remote_setup, fetch, local_process_time, vfs_stat, vfs_dir, vfs_readlink, vfs_md5, vfs_xattr, vfs_delete, vfs_open, vfs_read, vfs_write, vfs_glob, vfs_vmfs_stat, vfs_vmfs_dir, vfs_vmfs_read, wait, thread_name, thread_sort_index, skyframe_eval, skyfunction, critical_path, critical_path_component, handle_gc_notification, local_action_counts, starlark_parser, starlark_user_fn, starlark_builtin_fn, starlark_user_compiled_fn, starlark_repository_fn, action_fs_staging, remote_cache_check, remote_download, remote_network, filesystem_traversal, worker_execution, worker_setup, worker_borrow, worker_working, worker_copying_outputs, credential_helper, conflict_check, dynamic_lock, repository_fetch, repository_vendor, repo_cache_gc_wait, spawn_log, wasm_load, wasm_exec or unknown>
    Specifies additional profile tasks to be included in the profile.
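As a sketch, extra task types from the list above can be requested alongside profiling (the --profile flag for choosing the output file is assumed here; task names are taken from the list in the header):

```shell
# Write a JSON profile that also includes VFS stat and remote-download detail.
bazel build //... \
  --profile=/tmp/prof.json \
  --experimental_profile_additional_tasks=vfs_stat,remote_download
```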
    Includes the extra "out" attribute in action events that contains the exec 
    path to the action's primary output.
    Includes target configuration hash in action events' JSON profile data.
    Includes target label in action events' JSON profile data.
    Run extra actions for alternative Java api versions in a proto_library.
    py_binary targets include their label even when stamping is disabled.
Controls the output of BEP ActionSummary and BuildGraphMetrics, limiting the number of mnemonics in ActionData and number of entries reported in BuildGraphMetrics.AspectCount/RuleClassCount. By default the number of types is limited to the top 20, by number of executed actions for ActionData, and by number of instances for RuleClass and Aspects. Setting this option will write statistics for all mnemonics, rule classes and aspects.
Controls the output of BEP BuildGraphMetrics, including expensive-to-compute Skyframe metrics about SkyKeys, RuleClasses and Aspects. With this flag set to false, the BuildGraphMetrics.rule_count and aspect fields will not be populated in the BEP.
    Whether to make source manifest actions remotable
The minimum blob size required to compress/decompress with zstd. Ineffectual unless --remote_cache_compression is set.
    The maximum number of attempts to retry if the build encountered a 
    transient remote cache error that would otherwise fail the build. Applies 
    for example when artifacts are evicted from the remote cache, or in certain 
    cache failure conditions. A non-zero value will implicitly set 
    --incompatible_remote_use_new_exit_code_for_lost_inputs to true. A new 
    invocation id will be generated for each attempt. If you generate the 
    invocation id yourself and provide it to Bazel with --invocation_id, you 
    should not use this flag. Instead, set 
    --incompatible_remote_use_new_exit_code_for_lost_inputs and check for 
    exit code 39.
If set to true, Bazel will extend the lease for outputs of remote actions during the build by sending `FindMissingBlobs` calls periodically to remote cache. The frequency is based on the value of `--experimental_remote_cache_ttl`.
    The guaranteed minimal TTL of blobs in the remote cache after their digests 
    are recently referenced e.g. by an ActionResult or FindMissingBlobs. Bazel 
    does several optimizations based on the blobs' TTL e.g. doesn't repeatedly 
    call GetActionResult in an incremental build. The value should be set 
    slightly less than the real TTL since there is a gap between when the 
    server returns the digests and when Bazel receives them.
A path to a directory where the corrupted outputs will be captured to.
If set to true, discard in-memory copies of the input root's Merkle tree and associated input mappings during calls to GetActionResult() and Execute(). This reduces memory usage significantly, but does require Bazel to recompute them upon remote cache misses and retries.
A Remote Asset API endpoint URI, to be used as a remote download proxy. The supported schemas are grpc, grpcs (grpc with TLS enabled) and unix (local UNIX sockets). If no schema is provided Bazel will default to grpcs. See: https://github.com/bazelbuild/remote-apis/blob/master/build/bazel/remote/asset/v1/remote_asset.proto
Whether to fall back to the local downloader if remote downloader fails.
Whether to propagate credentials from netrc and credential helper to the remote downloader server. The server implementation needs to support the new `http_header_url:<url-index>:<header-key>` qualifier where the `<url-index>` is a 0-based position of the URL inside the FetchBlobRequest's `uris` field. The URL-specific headers should take precedence over the global headers.
Whether to use keepalive for remote execution calls.
    Sets the allowed failure rate, as a percentage, for a specific time 
    window after which Bazel stops calling the remote cache/executor. By 
    default the value is 10. Setting this to 0 means no limitation.
    The interval over which the failure rate of remote requests is computed. 
    On a zero or negative value, the failure rate is computed over the whole 
    duration of the execution. The following units can be used: Days (d), 
    hours (h), minutes (m), seconds (s), and milliseconds (ms). If the unit is 
    omitted, the value is interpreted as seconds.
If set to true, Bazel will mark inputs as tool inputs for the remote executor. This can be used to implement remote persistent workers.
If set to true, Merkle tree calculations will be memoized to improve the remote cache hit checking speed. The memory foot print of the cache is controlled by --experimental_remote_merkle_tree_cache_size.
The number of Merkle trees to memoize to improve the remote cache hit checking speed. Even though the cache is automatically pruned according to Java's handling of soft references, out-of-memory errors can occur if set too high. If set to 0, the cache size is unlimited. The optimal value varies depending on the project's size. Defaults to 1000.
HOST or HOST:PORT of a remote output service endpoint. The supported schemas are grpc, grpcs (grpc with TLS enabled) and unix (local UNIX sockets). If no schema is provided Bazel will default to grpcs. Specify grpc:// or unix: schema to disable TLS.
The path under which the contents of output directories managed by the --experimental_remote_output_service are placed. The actual output directory used by a build will be a descendant of this path and determined by the output service.
If set to true, enforce that all actions that can run remotely are cached, or else fail the build. This is useful to troubleshoot non-determinism issues as it allows checking whether actions that should be cached are actually cached without spuriously injecting new results into the cache.
Enables remote cache key scrubbing with the supplied configuration file, which must be a protocol buffer in text format (see src/main/protobuf/remote_scrubbing.proto). This feature is intended to facilitate sharing a remote/disk cache between actions executing on different platforms but targeting the same platform. It should be used with extreme care, as improper settings may cause accidental sharing of cache entries and result in incorrect builds. Scrubbing does not affect how an action is executed, only how its remote/disk cache key is computed for the purpose of retrieving or storing an action result. Scrubbed actions are incompatible with remote execution, and will always be executed locally instead. Modifying the scrubbing configuration does not invalidate outputs present in the local filesystem or internal caches; a clean build is required to reexecute affected actions. In order to successfully use this feature, you likely want to set a custom --host_platform together with --experimental_platform_in_output_dir (to normalize output prefixes) and --incompatible_strict_action_env (to normalize environment variables).
    If set to true, repository_rule gains some remote execution capabilities.
    If set, the repository cache will hardlink the file in case of a cache hit, 
    rather than copying. This is intended to save disk space.
    If true, enables the repository_ctx `load_wasm` and `execute_wasm` methods.
    The maximum number of attempts to retry a download error. If set to 0, 
    retries are disabled.
    If non-empty, write a Starlark value with the resolved information of all 
    Starlark repository rules that were executed.
    If non-empty, read the specified resolved file instead of the WORKSPACE 
    file.
    When enabled, --trim_test_configuration will not trim the test 
    configuration for rules marked testonly=1. This is meant to reduce action 
    conflict issues when non-test rules depend on cc_test rules. No effect if 
    --trim_test_configuration is false.
    Enable experimental rule extension API and subrule APIs
    Whether to include the command-line residue in run build events which could 
    contain the residue. By default, the residue is not included in run command 
    build events that could contain the residue.
--experimental_sandbox_async_tree_delete_idle_threads=<integer, or a keyword ("auto", "HOST_CPUS", "HOST_RAM"), optionally followed by an operation ([-|*]<float>) eg. "auto", "HOST_CPUS*.5">
    If 0, delete sandbox trees as soon as an action completes (causing 
    completion of the action to be delayed). If greater than zero, execute the 
    deletion of such trees on an asynchronous thread pool that has size 1 when 
    the build is running and grows to the size specified by this flag when the 
    server is idle.
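Following the expression syntax in the flag's header, a sketch:

```shell
# Delete sandbox trees asynchronously; let the deletion pool grow to half
# of the host's cores when the server is idle.
bazel build //... \
  --experimental_sandbox_async_tree_delete_idle_threads=HOST_CPUS*.5
```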
    If true, actions whose mnemonic matches the input regex will have their 
    resource requests enforced as limits, overriding the value of 
    --experimental_sandbox_limits, if the resource type supports it. For 
    example, a test that declares cpu:3 and resources:memory:10 will run with 
    at most 3 cpus and 10 megabytes of memory.
--experimental_sandbox_limits=<named double, 'name=value', where value is an integer, or a keyword ("auto", "HOST_CPUS", "HOST_RAM"), optionally followed by an operation ([-|*]<float>) eg. "auto", "HOST_CPUS*.5">
    If > 0, each Linux sandbox will be limited to the given amount for the 
    specified resource. Requires --incompatible_use_new_cgroup_implementation 
    and overrides --experimental_sandbox_memory_limit_mb. Requires cgroups v1 
    or v2 and permissions for the users to the cgroups dir.
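Following the `name=value` syntax in the header, a sketch (resource names follow the cpu/memory example in the surrounding text; the values are illustrative, with memory assumed to be in MB as for --experimental_sandbox_memory_limit_mb):

```shell
# Cap each Linux sandbox at 4 CPUs and 8 GiB of memory.
bazel test //... \
  --incompatible_use_new_cgroup_implementation \
  --experimental_sandbox_limits=cpu=4 \
  --experimental_sandbox_limits=memory=8192
```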
--experimental_sandbox_memory_limit_mb=<integer number of MBs, or "HOST_RAM", optionally followed by [-|*]<float>.>
    If > 0, each Linux sandbox will be limited to the given amount of memory 
    (in MB). Requires cgroups v1 or v2 and permissions for the users to the 
    cgroups dir.
    Save the state of enabled and requested features as an output of 
    compilation.
    Scale all timeouts in Starlark repository rules by this factor. In this 
    way, external repositories can be made to work on machines that are slower 
    than the rule author expected, without changing the source code.
    If enabled, the worker pool may be shrunk when worker memory pressure is 
    high. This flag only works when the flag 
    experimental_total_worker_memory_limit_mb is enabled.
    If set to true, non-main repositories are planted as symlinks to the main 
    repository in the execution root. That is, all repositories are direct 
    children of the $output_base/execution_root directory. This has the side 
    effect of freeing up $output_base/execution_root/__main__/external for the 
    real top-level 'external' directory.
    If enabled, the register_toolchain function may not include target patterns 
    which may refer to more than one package.
    For debugging Skyfocus. Dump the focused SkyKeys (roots, leafs, focused 
    deps, focused rdeps).
    For debugging Skyfocus. If enabled, trigger manual GC before/after focusing 
    to report heap sizes reductions. This will increase the Skyfocus latency.
    Strategies for Skyfocus to handle changes outside of the working set.
Enable dynamic execution by running actions locally and remotely in parallel. Bazel spawns each action locally and remotely and picks the one that completes first. If an action supports workers, the local action will be run in the persistent worker mode. To enable dynamic execution for an individual action mnemonic, use the `--internal_spawn_scheduler` and `--strategy=<mnemonic>=dynamic` flags instead.  Expands to: --internal_spawn_scheduler --spawn_strategy=dynamic
    If true, then Bazel will run coverage postprocessing for test in a new 
    spawn.
    If this flag is set, and a test action does not generate a test.xml file, 
    then Bazel uses a separate action to generate a dummy test.xml file 
    containing the test log. Otherwise, Bazel generates a test.xml as part of 
    the test action.
    If enabled, the Starlark version of cc_import can be used.
    Stream log file uploads directly to the remote storage rather than writing 
    them to disk.
    If this option is enabled, filesets will treat all output artifacts as 
    regular files. They will not traverse directories or be sensitive to 
    symlinks.
    If true, checks that a Java target explicitly declares all directly used 
    targets as dependencies.
--experimental_total_worker_memory_limit_mb=<integer number of MBs, or "HOST_RAM", optionally followed by [-|*]<float>.>
    If this limit is greater than zero, idle workers might be killed if the 
    total memory usage of all workers exceeds the limit.
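Per the header syntax, a sketch pairing the limit with worker eviction (the expression form follows the flag's header; the fraction is illustrative):

```shell
# Allow Bazel to kill idle workers once all workers together exceed
# half of host RAM.
bazel build //... --experimental_total_worker_memory_limit_mb=HOST_RAM*.5
```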
    The maximum size of the stdout / stderr files that will be printed to the 
    console. -1 implies no limit.
    Whether to narrow inputs to C/C++ compilation by parsing #include lines 
    from input files. This can improve performance and incrementality by 
    decreasing the size of compilation input trees. However, it can also break 
    builds because the include scanner does not fully implement C preprocessor 
    semantics. In particular, it does not understand dynamic #include 
    directives and ignores preprocessor conditional logic. Use at your own 
    risk. Any issues relating to this flag that are filed will be closed.
    If set to true, do not mount root, only mount what's provided with 
    sandbox_add_mount_pair. Input files will be hardlinked to the sandbox 
    instead of symlinked to from the sandbox. If action input files are located 
    on a filesystem different from the sandbox, then the input files will be 
    copied instead.
    If specified, Bazel will generate llvm-cov coverage map information rather 
    than gcov when collect_code_coverage is enabled.
    Please only use this flag as part of a suggested migration or testing 
    strategy. Note that the heuristic has known deficiencies and it is 
    suggested to migrate to relying on just 
    --experimental_override_name_platform_in_output_dir.
    If set to true, additionally use semaphore to limit number of concurrent 
    jobs.
    Whether to run validation actions using aspect (for parallelism with tests).
    Use Windows sandbox to run actions. If "yes", the binary provided by 
    --experimental_windows_sandbox_path must be valid and correspond to a 
    supported version of sandboxfs. If "auto", the binary may be missing or not 
    compatible.
    Path to the Windows sandbox binary to use when 
    --experimental_use_windows_sandbox is true. If a bare name, use the first 
    binary of that name found in the PATH.
If true, experimental Windows support for --watchfs is enabled. Otherwise --watchfs is a no-op on Windows. Make sure to also enable --watchfs.
    If non-empty, only allow using persistent workers with the given worker key 
    mnemonic.
    If enabled, Bazel may send cancellation requests to workers that support 
    them.
The threading mode to use for repo fetching. If set to 'off', no worker thread is used, and the repo fetching is subject to restarts. Otherwise, uses a virtual worker thread.
--experimental_worker_memory_limit_mb=<integer number of MBs, or "HOST_RAM", optionally followed by [-|*]<float>.>
    If this limit is greater than zero, workers might be killed if the memory 
    usage of the worker exceeds the limit. If not used together with dynamic 
    execution and `--experimental_dynamic_ignore_local_signals=9`, this may 
    crash your build.
    The interval between collecting worker metrics and possibly attempting 
    evictions. Cannot effectively be less than 1s for performance reasons.
    If enabled, multiplex workers with a 'supports-multiplex-sandboxing' 
    execution requirement will run in a sandboxed environment, using a separate 
    sandbox directory per work request. Multiplex workers with the execution 
    requirement are always sandboxed when running under the dynamic execution 
    strategy, irrespective of this flag.
    If enabled, workers are run in a hardened sandbox, if the implementation 
    allows it. If hardening is enabled then tmp directories are distinct for 
    different workers.
    A worker key mnemonic for which the contents of the sandbox directory are 
    tracked in memory. This may improve build performance at the cost of 
    additional memory usage. Only affects sandboxed workers. May be specified 
    multiple times for different mnemonics.
    If enabled, action arguments for workers that do not follow the worker 
    specification will cause an error. Worker arguments must have exactly one 
    @flagfile argument as the last item in the list of arguments.
    The working set for Skyfocus. Specify as comma-separated workspace root-
    relative paths. This is a stateful flag. Defining a working set persists it 
    for subsequent invocations, until it is redefined with a new set.
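Per the description above, a sketch (target and paths hypothetical):

```shell
# Define a Skyfocus working set of workspace root-relative paths; it persists
# for later invocations until redefined with a new set.
bazel build //my:target --experimental_working_set=src/main,src/test
```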
Log certain Workspace Rules events into this file as delimited WorkspaceEvent protos.
    Causes the build system to explain each executed step of the build. The 
    explanation is written to the specified log file.
Explicitly specify a dependency to JUnit or Hamcrest in a java_test instead of accidentally obtaining it from the TestRunner's deps. Only works for bazel right now.
    The platforms that are available as execution platforms to run actions. 
    Platforms can be specified by exact target, or as a target pattern. These 
    platforms will be considered before those declared in the WORKSPACE file by 
    register_execution_platforms(). This option may only be set once; later 
    instances will override earlier flag settings.
    The toolchain rules to be considered during toolchain resolution. 
    Toolchains can be specified by exact target, or as a target pattern. These 
    toolchains will be considered before those declared in the WORKSPACE file 
    by register_toolchains().
    Generate binaries with FDO instrumentation. With Clang/LLVM compiler, it 
    also accepts the directory name under which the raw profile file(s) will be 
    dumped at runtime.
      Using this option will also add: --copt=-Wno-error 
    Use FDO profile information to optimize compilation. Specify the name of a 
    zip file containing a .gcda file tree, an afdo file containing an auto 
    profile, or an LLVM profile file. This flag also accepts files specified as 
    labels (e.g. `//foo/bar:file.afdo` - you may need to add an `exports_files` 
    directive to the corresponding package) and labels pointing to 
    `fdo_profile` targets. This flag will be superseded by the `fdo_profile` 
    rule.
    The fdo_profile representing the profile to be used for optimization.
    The given features will be enabled or disabled by default for targets built 
    in the target configuration. Specifying -<feature> will disable the 
    feature. Negative features always override positive ones. See also 
    --host_features.
Allows the command to fetch external dependencies. If set to false, the command will utilize any cached version of the dependency, and if none exists, the command will result in failure.
    Specifies which compilation modes use fission for C++ compilations and 
    links. May be any combination of {'fastbuild', 'dbg', 'opt'} or the 
    special values 'yes' to enable all modes and 'no' to disable all modes.
    Sets a shorthand name for a Starlark flag. It takes a single key-value pair 
    in the form "<key>=<value>" as an argument.
--flaky_test_attempts=<positive integer, the string "default", or test_regex@attempts. This flag may be passed more than once>
    Each test will be retried up to the specified number of times in case of 
    any test failure. Tests that required more than one attempt to pass are 
    marked as 'FLAKY' in the test summary. Normally the value specified is just 
    an integer or the string 'default'. If an integer, then all tests will be 
    run up to N times. If 'default', then only a single test attempt will be 
    made for regular tests and three for tests marked explicitly as flaky by 
    their rule (flaky=1 attribute). Alternate syntax: 
    regex_filter@flaky_test_attempts. Where flaky_test_attempts is as above and 
    regex_filter stands for a list of include and exclude regular expression 
    patterns (Also see --runs_per_test). Example: 
    --flaky_test_attempts=//foo/.*,-//foo/bar/.*@3 deflakes all tests in //foo/ 
    except those under foo/bar three times. This option can be passed multiple 
    times. The most recently 
    passed argument that matches takes precedence. If nothing matches, behavior 
    is as if 'default' above.
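The plain and regex forms described above might look like (target patterns illustrative):

```shell
# All tests: up to 3 attempts each.
bazel test //... --flaky_test_attempts=3

# Per-pattern: deflake tests in //foo/ three times, except those under foo/bar.
bazel test //... --flaky_test_attempts=//foo/.*,-//foo/bar/.*@3
```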
    If enabled, all C++ compilations produce position-independent code ("-
    fPIC"), links prefer PIC pre-built libraries over non-PIC libraries, and 
    links produce position-independent executables ("-pie").
    Limits which, if reached, cause GcThrashingDetector to crash Bazel with an 
    OOM. Each limit is specified as <period>:<count> where period is a duration 
    and count is a positive integer. If more than --gc_thrashing_threshold 
    percent of tenured space (old gen heap) remains occupied after <count> 
    consecutive full GCs within <period>, an OOM is triggered. Multiple limits 
    can be specified separated by commas.
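The `<period>:<count>` form above might be written as (limits illustrative):

```shell
# OOM if more than --gc_thrashing_threshold percent of old gen stays occupied
# after 2 full GCs within 1 minute, or after 3 full GCs within 5 minutes.
bazel build //... --gc_thrashing_limits=1m:2,5m:3
```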
    The percent of tenured space occupied (0-100) above which 
    GcThrashingDetector considers memory pressure events against its limits 
    (--gc_thrashing_limits). If set to 100, GcThrashingDetector is disabled.
    If enabled, Bazel profiles the build and writes a JSON-format profile into 
    a file in the output base. View profile by loading into chrome://tracing. 
    By default Bazel writes the profile for all build-like commands and query.
    Specify how to execute genrules. This flag will be phased out. Instead, use 
    --spawn_strategy=<value> to control all actions or 
    --strategy=Genrule=<value> to control genrules only.
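As the text suggests, the per-mnemonic replacement might look like (the strategy value is illustrative):

```shell
# Preferred over --genrule_strategy: scope the strategy to genrules only.
bazel build //... --strategy=Genrule=sandboxed
```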
A comma-separated list of Google Cloud authentication scopes.
Specifies the file to get authentication credentials from. See https://cloud.google.com/docs/authentication for details.
Whether to use 'Google Application Default Credentials' for authentication. See https://cloud.google.com/docs/authentication for details. Disabled by default.
Configures keep-alive pings for outgoing gRPC connections. If this is set, then Bazel sends pings after this much time of no read operations on the connection, but only if there is at least one pending gRPC call. Times are treated as second granularity; it is an error to set a value less than one second. By default, keep-alive pings are disabled. You should coordinate with the service owner before enabling this setting. For example, to set this flag to 30 seconds, pass --grpc_keepalive_time=30s.
Configures a keep-alive timeout for outgoing gRPC connections. If keep-alive pings are enabled with --grpc_keepalive_time, then Bazel times out a connection if it does not receive a ping reply after this much time. Times are treated as second granularity; it is an error to set a value less than one second. If keep-alive pings are disabled, then this setting is ignored.
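A sketch combining the two settings above (the --grpc_keepalive_timeout name is assumed to be the companion timeout flag described in the second paragraph; values illustrative):

```shell
# Ping after 30s with no reads; drop the connection if no ping reply within 20s.
bazel build //... --grpc_keepalive_time=30s --grpc_keepalive_timeout=20s
```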
    A label to a checked-in libc library. The default value is selected by the 
    crosstool toolchain, and you almost never need to override it.
    Set this to 'full' to enable checking the ctime of all input files of an 
    action before uploading it to a remote cache. There may be cases where the 
    Linux kernel delays writing of files, which could cause false positives. 
    The default is 'lite', which only checks source files in the main 
    repository. Setting this to 'off' disables all checks. This is not 
    recommended, as the cache may be polluted when a source file is changed 
    while an action that takes it as an input is executing.
    Whether to manually output a heap dump if an OOM is thrown (including 
    manual OOMs due to reaching --gc_thrashing_limits). The dump will be 
    written to <output_base>/<invocation_id>.heapdump.hprof. This option 
    effectively replaces -XX:+HeapDumpOnOutOfMemoryError, which has no effect 
    for manual OOMs.
    If true, Blaze will remove FileState and DirectoryListingState nodes 
    after the related File and DirectoryListing nodes are done, to save 
    memory. We expect that these nodes are unlikely to be needed again; if 
    they are, the program will re-evaluate them.
    Specifies the set of environment variables available to actions with 
    execution configurations. Variables can be either specified by name, in 
    which case the value will be taken from the invocation environment, or by 
    the name=value pair which sets the value independent of the invocation 
    environment. This option can be used multiple times; for options given for 
    the same variable, the latest wins, options for different variables 
    accumulate.
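    As a sketch, both forms could be combined in a .bazelrc, assuming this 
    entry describes the --host_action_env flag (the variable names below 
    are illustrative):

```
# Take CC from the invocation environment.
build --host_action_env=CC
# Set FOO to a fixed value, independent of the invocation environment.
build --host_action_env=FOO=bar
# For the same variable, the latest option wins: FOO ends up as baz.
build --host_action_env=FOO=baz
```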
    Specify the mode the tools used during the build will be built in. Values: 
    'fastbuild', 'dbg', 'opt'.
    Additional option to pass to the C compiler when compiling C (but not C++) 
    source files in the exec configurations.
    Additional options to pass to the C compiler for tools built in the exec 
    configurations.
    Additional options to pass to C++ compiler for tools built in the exec 
    configurations.
    The given features will be enabled or disabled by default for targets built 
    in the exec configuration. Specifying -<feature> will disable the feature. 
    Negative features always override positive ones.
    Overrides the Python version for the exec configuration. Can be "PY2" or 
    "PY3".
    If specified, this setting overrides the libc top-level directory (--
    grte_top) for the exec configuration.
The Java launcher used by tools that are executed during a build.
Additional options to pass to javac when building tools that are executed during a build.
Additional options to pass to the Java VM when building tools that are executed during the build. These options will get added to the VM startup options of each java_binary target.
    Additional option to pass to linker when linking tools in the exec 
    configurations.
    Minimum compatible macOS version for host targets. If unspecified, uses 
    'macos_sdk_version'.
--host_per_file_copt=<comma-separated list of regex expressions with prefix '-' specifying excluded paths followed by an @ and a comma separated list of options>
    Additional options to selectively pass to the C/C++ compiler when compiling 
    certain files in the exec configurations. This option can be passed 
    multiple times. Syntax: regex_filter@option_1,option_2,...,option_n. Where 
    regex_filter stands for a list of include and exclude regular expression 
    patterns (Also see --instrumentation_filter). option_1 to option_n stand 
    for arbitrary command line options. If an option contains a comma it has to 
    be quoted with a backslash. Options can contain @. Only the first @ is used 
    to split the string. Example: --host_per_file_copt=//foo/.*\.cc,-//foo/bar\.
    cc@-O0 adds the -O0 command line option to the gcc command line of all cc 
    files in //foo/ except bar.cc.
    The maximum timeout for http download retries. With a value of 0, no 
    timeout maximum is defined.
    If true, Bazel ignores `bazel_dep` and `use_extension` declared as 
    `dev_dependency` in the MODULE.bazel of the root module. Note that 
    those dev dependencies are always ignored in any MODULE.bazel that is 
    not the root module's, regardless of the value of this flag.
    Do not print a warning when sandboxed execution is not supported on this 
    system.
    If set to true, tags will be propagated from a target to the actions' 
    execution requirements; otherwise tags are not propagated. See https:
    //github.com/bazelbuild/bazel/issues/8830 for details.
    Check the validity of elements added to depsets, in all constructors. 
    Elements must be immutable, but historically the depset(direct=...) 
    constructor forgot to check. Use tuples instead of lists in depset 
    elements. See https://github.com/bazelbuild/bazel/issues/10313 for details.
    If true, native rules add <code>DefaultInfo.files</code> of data 
    dependencies to their runfiles, which matches the recommended behavior for 
    Starlark rules (https://bazel.
    build/extending/rules#runfiles_features_to_avoid).
    When enabled, an exec group is automatically created for each toolchain 
    used by a rule. For this to work, the rule needs to specify the 
    `toolchain` parameter on its actions. For more information, see 
    https://github.com/bazelbuild/bazel/issues/17134.
    A comma-separated list of rules (or other symbols) that were previously 
    part of Bazel and which are now to be retrieved from their respective 
    external repositories. This flag is intended to be used to facilitate 
    migration of rules out of Bazel. See also https://github.
    com/bazelbuild/bazel/issues/23043.
    A symbol that is autoloaded within a file behaves as if its built-into-
    Bazel definition were replaced by its canonical new definition in an 
    external repository. For a BUILD file, this essentially means implicitly 
    adding a load() statement. For a .bzl file, it's either a load() statement 
    or a change to a field of the `native` object, depending on whether the 
    autoloaded symbol is a rule.
    Bazel maintains a hardcoded list of all symbols that may be autoloaded; 
    only those symbols may appear in this flag. For each symbol, Bazel knows 
    the new definition location in an external repository, as well as a set of 
    special-cased repositories that must not autoload it to avoid creating 
    cycles.
    A list item of "+foo" in this flag causes symbol foo to be autoloaded, 
    except in foo's exempt repositories, within which the Bazel-defined version 
    of foo is still available.
    A list item of "foo" triggers autoloading as above, but the Bazel-
    defined version of foo is not made available to the exempt 
    repositories. This ensures that foo's external repository does not 
    depend on the old Bazel implementation of foo.
    A list item of "-foo" does not trigger any autoloading, but makes the Bazel-
    defined version of foo inaccessible throughout the workspace. This is used 
    to validate that the workspace is ready for foo's definition to be deleted 
    from Bazel.
    If a symbol is not named in this flag then it continues to work as 
    normal -- no autoloading is done, nor is the Bazel-defined version 
    suppressed. For configuration see https://github.com/bazelbuild/bazel/blob/master/src/main/java/com/google/devtools/build/lib/packages/AutoloadSymbols.java. 
    As a shortcut, a whole repository may also be used; for example, 
    +@rules_python will autoload all Python rules.
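    As a sketch, the list-item forms described above might appear in a 
    .bazelrc as follows (the symbol names are illustrative; only symbols 
    from Bazel's hardcoded list are accepted):

```
# "+foo": autoload py_binary, keeping the Bazel-defined version available
#         inside its exempt repositories.
# "foo":  autoload cc_library and hide the Bazel-defined version from the
#         exempt repositories as well.
# "-foo": suppress the Bazel-defined java_binary without autoloading it.
# "+@repo": shortcut that autoloads every symbol from that repository.
common --incompatible_autoload_externally=+py_binary,cc_library,-java_binary,+@rules_python
```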
    If enabled, "bazel test --run_under=//:runner" builds "//:runner" in 
    the exec configuration. If disabled, it builds "//:runner" in the 
    target configuration. Bazel executes tests on exec machines, so the 
    former is more correct. This doesn't affect "bazel run", which always 
    builds the "--run_under" target in the target configuration.
    If true, Bazel will fail a sharded test if the test runner does not 
    indicate that it supports sharding by touching the file at the path in 
    TEST_SHARD_STATUS_FILE. If false, a test runner that does not support 
    sharding will lead to all tests running in each shard.
    If enabled, check testonly for prerequisite targets that are output files 
    by looking up the testonly of the generating rule. This matches visibility 
    checking.
    If enabled, visibility checking also applies to toolchain implementations.
    If enabled, the <binary>.repo_mapping file emits a module extension's repo 
    mapping only once instead of once for each repo generated by the extension 
    that contributes runfiles.
    If incompatible_enforce_config_setting_visibility=false, this is a noop. 
    Else, if this flag is false, any config_setting without an explicit 
    visibility attribute is //visibility:public. If this flag is true, 
    config_setting follows the same visibility logic as all other rules. See 
    https://github.com/bazelbuild/bazel/issues/12933.
    This flag changes the default behavior so that __init__.py files are no 
    longer automatically created in the runfiles of Python targets. Precisely, 
    when a py_binary or py_test target has legacy_create_init set to "auto" 
    (the default), it is treated as false if and only if this flag is set. See 
    https://github.com/bazelbuild/bazel/issues/10076.
    When true, Bazel no longer returns a list from linking_context.
    libraries_to_link but returns a depset instead.
    Controls if the autoloads (set by --incompatible_autoload_externally) 
    are enabled in the main repository. When enabled, the rules (or other 
    symbols) that were previously part of Bazel need to have load 
    statements. Use buildifier to add them.
    If enabled, direct usage of the native Android rules is disabled. Please 
    use the Starlark Android rules from https://github.
    com/bazelbuild/rules_android
    No-op. Kept here for backwards compatibility.
    If false, native repo rules can be used in WORKSPACE; otherwise, Starlark 
    repo rules must be used instead. Native repo rules include 
    local_repository, new_local_repository, and local_config_platform. When 
    this flag is set, the local_config_platform built-in module is also 
    unavailable in Bzlmod; use `@platforms//host` instead.
    If true, java_binary is always executable, and the create_executable 
    attribute is removed.
    Disable objc_library's custom transition and inherit from the top level 
    target instead (no-op in Bazel).
    If set to true, rule attributes cannot set 'cfg = "host"'. Rules should set 
    'cfg = "exec"' instead.
    If set to true, disable the ability to access providers on 'target' objects 
    via field syntax. Use provider-key syntax instead. For example, instead of 
    using `ctx.attr.dep.my_info` to access `my_info` from inside a rule 
    implementation function, use `ctx.attr.dep[MyInfo]`. See https://github.
    com/bazelbuild/bazel/issues/9014 for details.
    If set to true, disable the ability to utilize the default provider via 
    field syntax. Use provider-key syntax instead. For example, instead of 
    using `ctx.attr.dep.files` to access `files`, utilize 
    `ctx.attr.dep[DefaultInfo].files`. See 
    https://github.com/bazelbuild/bazel/issues/9014 for details.
    If set to true, calling the deprecated ctx.resolve_tools API always fails. 
    Uses of this API should be replaced by an executable or tools argument to 
    ctx.actions.run or ctx.actions.run_shell.
    If set to true, the default value of the `allow_empty` argument of glob() 
    is False.
    If true, disallow sdk_frameworks and weak_sdk_frameworks attributes in 
    objc_library and objc_import.
    If set to true, rule implementation functions may not return a struct. They 
    must instead return a list of provider instances.
    When true, Bazel no longer modifies command line flags used for linking, 
    and also doesn't selectively decide which flags go to the param file and 
    which don't.  See https://github.com/bazelbuild/bazel/issues/7670 for 
    details.
    If true, Bazel will not enable 'host' and 'nonhost' features in the c++ 
    toolchain (see https://github.com/bazelbuild/bazel/issues/7407 for more 
    information).
    Use toolchain resolution to select the Apple SDK for apple rules (Starlark 
    and native)
    If enabled, certain deprecated APIs (native.repository_name, Label.
    workspace_name, Label.relative) can be used.
    If true, proto lang rules define toolchains from protobuf repository.
    If true, enforce config_setting visibility restrictions. If false, every 
    config_setting is visible to every target. See https://github.
    com/bazelbuild/bazel/issues/12932.
    If enabled (or set to 'error'), fail if Starlark files are not UTF-8 
    encoded. If set to 'warning', emit a warning instead. If set to 'off', 
    Bazel assumes that Starlark files are UTF-8 encoded but does not verify 
    this assumption. Note that Starlark files which are not UTF-8 encoded can 
    cause Bazel to behave inconsistently.
    If true, exclusive tests will run with sandboxed strategy. Add a 
    'local' tag to force an exclusive test to run locally.
    If enabled, targets that have unknown attributes set to None fail.
    If true, runfiles of targets listed in the srcs attribute are available to 
    targets that consume the filegroup as a data dependency.
    In package_group's `packages` attribute, changes the meaning of the value 
    "//..." to refer to all packages in the current repository instead of all 
    packages in any repository. You can use the special value "public" in place 
    of "//..." to obtain the old behavior. This flag requires that --
    incompatible_package_group_has_public_syntax also be enabled.
    If set to true, the output_jar and host_javabase parameters in 
    pack_sources and host_javabase in compile will all be removed.
    If set to true, enables the legacy implicit fallback from sandboxed to 
    local strategy. This flag will eventually default to false and then become 
    a no-op. Use --strategy, --spawn_strategy, or --dynamic_local_strategy to 
    configure fallbacks instead.
    Whether a target that provides an executable expands to the executable 
    rather than the files in <code>DefaultInfo.files</code> under $(locations 
    ...) expansion if the number of files is not 1.
    This flag is a noop and scheduled for removal.
    If enabled, actions registered with ctx.actions.run and ctx.actions.
    run_shell with both 'env' and 'use_default_shell_env = True' specified will 
    use an environment obtained from the default shell environment by 
    overriding with the values passed in to 'env'. If disabled, the value of 
    'env' is completely ignored in this case.
    If true, the genfiles directory is folded into the bin directory.
    When enabled, passing multiple --modify_execution_info flags is additive. 
    When disabled, only the last flag is taken into account.
    If set to true, disables the function `attr.license`.
    If set, (used) source files are package private unless exported 
    explicitly. See 
    https://github.com/bazelbuild/proposals/blob/master/designs/2019-10-24-file-visibility.md
    If true, then methods on <code>repository_ctx</code> that are passed a 
    Label will no longer automatically watch the file under that label for 
    changes even if <code>watch = "no"</code>, and <code>repository_ctx.
    path</code> no longer causes the returned path to be watched. Use 
    <code>repository_ctx.watch</code> instead.
    If set to true, disables the `outputs` parameter of the `rule()` Starlark 
    function.
    If true, make the default value true for alwayslink attributes in 
    objc_library and objc_import.
    In package_group's `packages` attribute, allows writing "public" or 
    "private" to refer to all packages or no packages respectively.
    If true, targets built in the Python 2 configuration will appear under an 
    output root that includes the suffix '-py2', while targets built for Python 
    3 will appear in a root with no Python-related suffix. This means that the 
    `bazel-bin` convenience symlink will point to Python 3 targets rather than 
    Python 2. If you enable this option it is also recommended to enable `--
    incompatible_py3_is_default`.
    If true, `py_binary` and `py_test` targets that do not set their 
    `python_version` (or `default_python_version`) attribute will default to 
    PY3 rather than to PY2. If you set this flag it is also recommended to set 
    `--incompatible_py2_outputs_are_suffixed`.
    If true, using Python 2 settings will cause an error. This includes 
    python_version=PY2, srcs_version=PY2, and srcs_version=PY2ONLY. See https:
    //github.com/bazelbuild/bazel/issues/15684 for more information.
    When true, an error occurs when using the builtin py_* rules; instead 
    the rules_python rules should be used. See 
    https://github.com/bazelbuild/bazel/issues/17773 for more information 
    and migration instructions.
    If set to true, Bazel will use new exit code 39 instead of 34 if remote 
    cache errors, including cache evictions, cause the build to fail.
    If true, Bazel will not link library dependencies as whole archive by 
    default (see https://github.com/bazelbuild/bazel/issues/7362 for migration 
    instructions).
    If true, <code>--action_env=NAME=VALUE</code> will no longer affect 
    repository rule and module extension environments.
    
    This flag is a noop and scheduled for removal.
    If set to true, rule create_linking_context will require linker_inputs 
    instead of libraries_to_link. The old getters of linking_context will also 
    be disabled and just linker_inputs will be available.
    If set to true, the command parameter of actions.run_shell will only 
    accept strings.
    If set to true, each Linux sandbox will have its own dedicated empty 
    directory mounted as /tmp rather than sharing /tmp with the host 
    filesystem. Use --sandbox_add_mount_pair=/tmp to keep seeing the host's 
    /tmp in all sandboxes.
    If true, simplify configurable rule attributes which contain only 
    unconditional selects; for example, if 
    ["a"] + select({"//conditions:default": ["b"]}) is assigned to a rule 
    attribute, it is stored as ["a", "b"]. This option does not affect 
    attributes of symbolic macros or attribute default values.
    If set to true, deprecated ctx.build_file_path will not be available. ctx.
    label.package + '/BUILD' can be used instead.
    If enabled, certain language-specific modules (such as `cc_common`) are 
    unavailable in user .bzl files and may only be called from their respective 
    rules repositories.
    If true, Bazel uses an environment with a static value for PATH and does 
    not inherit LD_LIBRARY_PATH. Use --action_env=ENV_VARIABLE if you want to 
    inherit specific environment variables from the client, but note that doing 
    so can prevent cross-user caching if a shared cache is used.
    If true, strip action for executables will use flag -x, which does not 
    break dynamic symbol resolution.
    If set to true, the top level aspect will honor its required providers and 
    only run on top level targets whose rules' advertised providers satisfy the 
    required providers of the aspect.
    When true, Bazel will stringify the label @//foo:bar to @//foo:bar, instead 
    of //foo:bar. This only affects the behavior of str(), the % operator, and 
    so on; the behavior of repr() is unchanged. See https://github.
    com/bazelbuild/bazel/issues/15916 for more information.
    When true, Bazel will no longer allow using cc_configure from @bazel_tools. 
    Please see https://github.com/bazelbuild/bazel/issues/10134 for details and 
    migration instructions.
    If true, use the new implementation for cgroups. The old implementation 
    only supports the memory controller and ignores the value of --
    experimental_sandbox_limits.
    If set to true, executable native Python rules will use the Python runtime 
    specified by the Python toolchain, rather than the runtime given by legacy 
    flags like --python_top.
    This flag is a noop and scheduled for removal.
Adds a new repository with a local path in the form of <repository name>=<path>. This only takes effect with --enable_bzlmod and is equivalent to adding a corresponding `local_repository` to the root module's MODULE.bazel file via `use_repo_rule`. If the given path is an absolute path, it will be used as it is. If the given path is a relative path, it is relative to the current working directory. If the given path starts with '%workspace%', it is relative to the workspace root, which is the output of `bazel info workspace`. If the given path is empty, then remove any previous injections.
    When coverage is enabled, specifies whether to consider instrumenting test 
    rules. When set, test rules included by --instrumentation_filter are 
    instrumented. Otherwise, test rules are always excluded from coverage 
    instrumentation.
--instrumentation_filter=<comma-separated list of regex expressions with prefix '-' specifying excluded paths>
    When coverage is enabled, only rules with names included by the specified 
    regex-based filter will be instrumented. Rules prefixed with '-' are 
    excluded instead. Note that only non-test rules are instrumented unless --
    instrument_test_targets is enabled.
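    For instance, a coverage run might restrict instrumentation to one 
    package tree while excluding a subdirectory (the paths and the [/:] 
    regex idiom below are illustrative):

```
# Instrument rules under //src/..., but not //src/vendor/...
coverage --instrumentation_filter=//src[/:],-//src/vendor[/:]
```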
    Use interface shared objects if supported by the toolchain. All ELF 
    toolchains currently support this setting.
    Placeholder option so that we can tell in Blaze whether the spawn scheduler 
    was enabled.
    Unique identifier, in UUID format, for the command being run. If 
    explicitly specified, uniqueness must be ensured by the caller. The 
    UUID is printed to stderr, the BEP and remote execution protocol.
    Minimum compatible iOS version for target simulators and devices. If 
    unspecified, uses 'ios_sdk_version'.
    Comma-separated list of architectures to build an ios_application with. The 
    result is a universal binary containing all specified architectures.
    Specifies the version of the iOS SDK to use to build iOS applications. If 
    unspecified, uses the default iOS SDK version from 'xcode_version'.
    Certificate name to use for iOS signing. If not set will fall back to 
    provisioning profile. May be the certificate's keychain identity preference 
    or (substring) of the certificate's common name, as per codesign's man page 
    (SIGNING IDENTITIES).
    The device to simulate when running an iOS application in the 
    simulator, e.g. 'iPhone 6'. You can get a list of devices by running 
    'xcrun simctl list devicetypes' on the machine the simulator will be 
    run on.
    The version of iOS to run on the simulator when running or testing. This is 
    ignored for ios_test rules if a target device is specified in the rule.
Additional options to pass to the J2ObjC tool.
Causes the Java virtual machine of a java test to wait for a connection from a JDWP-compliant debugger (such as jdb) before starting the test. Implies -test_output=streamed. Expands to: --test_arg=--wrapper_script_flag=--debug --test_output=streamed --test_strategy=exclusive --test_timeout=9999 --nocache_test_results
Generate dependency information (for now, compile-time classpath) per Java target.
The Java launcher to use when building Java binaries.  If this flag is set to the empty string, the JDK launcher is used. The "launcher" attribute overrides this flag.
--jobs=<integer, or a keyword ("auto", "HOST_CPUS", "HOST_RAM"), optionally followed by an operation ([-|*]<float>) eg. "auto", "HOST_CPUS*.5">
    The number of concurrent jobs to run. Takes an integer, or a keyword 
    ("auto", "HOST_CPUS", "HOST_RAM"), optionally followed by an operation ([-
    |*]<float>) eg. "auto", "HOST_CPUS*.5". Values must be between 1 and 5000. 
    Values above 2500 may cause memory issues. "auto" calculates a reasonable 
    default based on host resources.
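    The keyword and operation forms can be sketched as .bazelrc entries:

```
build --jobs=auto          # reasonable default from host resources
build --jobs=200           # fixed number of concurrent jobs
build --jobs=HOST_CPUS*.5  # half of the host's CPU cores
```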
Regex for overriding the matching logic for JDK21+ JVM heap memory collection. We are relying on volatile internal G1 GC implementation details to get a clean memory metric; this option allows us to adapt to changes in that internal implementation without having to wait for a binary release. Passed to JDK Matcher.find().
Additional options to pass to the Java VM. These options will get added to the VM startup options of each java_binary target.
    Continue as much as possible after an error.  While the target that failed 
    and those that depend on it cannot be analyzed, other prerequisites of 
    these targets can be.
    If false, Blaze will discard the inmemory state from this build when the 
    build finishes. Subsequent builds will not have any incrementality with 
    respect to this one.
    If true, build runfiles symlink forests for external repositories under .
    runfiles/wsname/external/repo (in addition to .runfiles/repo).
    Use this to suppress generation of the legacy important_outputs field 
    in the TargetComplete event. important_outputs are required for the 
    Bazel to ResultStore/BTX integration.
Specifies a binary to use to generate the list of classes that must be in the main dex when compiling legacy multidex.
    Deprecated, superseded by --incompatible_remove_legacy_whole_archive (see 
    https://github.com/bazelbuild/bazel/issues/7362 for details). When on, use 
    --whole-archive for cc_binary rules that have linkshared=True and either 
    linkstatic=True or '-static' in linkopts. This is for backwards 
    compatibility only. A better alternative is to use alwayslink=1 where 
    required.
--loading_phase_threads=<integer, or a keyword ("auto", "HOST_CPUS", "HOST_RAM"), optionally followed by an operation ([-|*]<float>) eg. "auto", "HOST_CPUS*.5">
    Number of parallel threads to use for the loading/analysis phase. 
    Takes an integer, or a keyword ("auto", "HOST_CPUS", "HOST_RAM"), 
    optionally followed by an operation ([-|*]<float>) eg. "auto", 
    "HOST_CPUS*.5". "auto" sets a reasonable default based on host 
    resources. Must be at least 1.
    Explicitly set the total number of local CPU cores available to Bazel to 
    spend on build actions executed locally. Takes an integer, or "HOST_CPUS", 
    optionally followed by [-|*]<float> (eg. HOST_CPUS*.5 to use half the 
    available CPU cores). By default, ("HOST_CPUS"), Bazel will query system 
    configuration to estimate the number of CPU cores available.
    Set the number of extra resources available to Bazel. Takes in a 
    string-float pair. Can be used multiple times to specify multiple 
    types of extra resources. Bazel will limit concurrently running 
    actions based on the available extra resources and the extra resources 
    required. Tests can declare the amount of extra resources they need by 
    using a tag of the "resources:<resourcename>:<amount>" format. 
    Available CPU, RAM and resources cannot be set with this flag.
    Explicitly set the total amount of local host RAM (in MB) available to 
    Bazel to spend on build actions executed locally. Takes an integer, or 
    "HOST_RAM", optionally followed by [-|*]<float> (eg. HOST_RAM*.5 to use 
    half the available RAM). By default, ("HOST_RAM*.67"), Bazel will query 
    system configuration to estimate the amount of RAM available and will use 
    67% of it.
--local_resources=<named double, 'name=value', where value is an integer, or a keyword ("auto", "HOST_CPUS", "HOST_RAM"), optionally followed by an operation ([-|*]<float>) eg. "auto", "HOST_CPUS*.5">
    Set the number of resources available to Bazel. Takes in an assignment to a 
    float or HOST_RAM/HOST_CPUS, optionally followed by [-|*]<float> (eg. 
    memory=HOST_RAM*.5 to use half the available RAM). Can be used multiple 
    times to specify multiple types of resources. Bazel will limit concurrently 
    running actions based on the available resources and the resources 
    required. Tests can declare the amount of resources they need by using a 
    tag of the "resources:<resource name>:<amount>" format. Overrides resources 
    specified by --local_{cpu|ram|extra}_resources.
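    A sketch of possible .bazelrc entries (the "gpu" resource name is 
    illustrative):

```
# Let local actions use half the available RAM and all but two CPU cores.
build --local_resources=memory=HOST_RAM*.5
build --local_resources=cpu=HOST_CPUS-2
# Define a pool of 4 "gpu" units; a test can then claim one with
# tags = ["resources:gpu:1"].
build --local_resources=gpu=4
```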
Time to wait between terminating a local process due to timeout and forcefully shutting it down.
--local_test_jobs=<integer, or a keyword ("auto", "HOST_CPUS", "HOST_RAM"), optionally followed by an operation ([-|*]<float>) eg. "auto", "HOST_CPUS*.5">
    The max number of local test jobs to run concurrently. Takes an integer, or 
    a keyword ("auto", "HOST_CPUS", "HOST_RAM"), optionally followed by an 
    operation ([-|*]<float>) eg. "auto", "HOST_CPUS*.5". 0 means local 
    resources will limit the number of local test jobs to run concurrently 
    instead. Setting this greater than the value for --jobs is ineffectual.
    Specifies how and whether or not to use the lockfile. Valid values are 
    `update` to use the lockfile and update it if there are changes, 
    `refresh` to additionally refresh mutable information (yanked versions 
    and previously missing modules) from remote registries from time to 
    time, `error` to use the lockfile but throw an error if it's not up-to-
    date, or `off` to neither read from nor write to the lockfile.
    Additional option to pass to the LTO backend step (under --
    features=thin_lto).
    Additional option to pass to the LTO indexing step (under --
    features=thin_lto).
    Comma-separated list of architectures for which to build Apple macOS 
    binaries.
    Minimum compatible macOS version for targets. If unspecified, uses 
    'macos_sdk_version'.
    Specifies the version of the macOS SDK to use to build macOS applications. 
    If unspecified, uses the default macOS SDK version from 'xcode_version'.
    Writes intermediate parameter files to output tree even when using remote 
    action execution or caching. Useful when debugging actions. This is implied 
    by --subcommands and --verbose_failures.
    The maximum number of Starlark computation steps that may be executed by a 
    BUILD file (zero means no limit).
    When discarding the analysis cache due to a change in the build options, 
    displays up to the given number of changed option names. If the number 
    given is -1, all changed options will be displayed.
    Specifies maximum per-test-log size that can be emitted when --test_output 
    is 'errors' or 'all'. Useful for avoiding overwhelming the output with 
    excessively noisy test output. The test header is included in the log size. 
    Negative values imply no limit. Output is all or nothing.
    If set, write memory usage data to the specified file at phase ends and 
    stable heap to master log at end of build.
    Tune the memory profile's computation of stable heap at end of build. 
    Should be an even number of integers separated by commas. In each pair 
    the first integer is the number of GCs to perform, and the second is 
    the number of seconds to wait between them. Example: 2,4,4,0 would 
    perform 2 GCs with a 4-second pause, followed by 4 GCs with no pause.
    Add or remove keys from an action's execution info based on action 
    mnemonic.  Applies only to actions which support execution info. Many 
    common actions support execution info, e.g. Genrule, CppCompile, Javac, 
    StarlarkAction, TestRunner. When specifying multiple values, order matters 
    because many regexes may apply to the same mnemonic.
    
    Syntax: "regex=[+-]key,regex=[+-]key,...".
    
    Examples:
      '.*=+x,.*=-y,.*=+z' adds 'x' and 'z' to, and removes 'y' from, the 
    execution info for all actions.
      'Genrule=+requires-x' adds 'requires-x' to the execution info for all 
    Genrule actions.
      '(?!Genrule).*=-requires-x' removes 'requires-x' from the execution info 
    for all non-Genrule actions.
    
    A comma-separated list of URLs under which the source URLs of Bazel modules 
    can be found,
    in addition to and taking precedence over any registry-provided mirror 
    URLs. Set this to
    an empty value to disable the use of any mirrors not specified by the 
    registries. The
    default set of mirrors may change over time, but all downloads from mirrors 
    are verified
    by hashes stored in the registry (and thus pinned by the lockfile).
    
    The maximum depth of the graph internal to a depset (also known as 
    NestedSet), above which the depset() constructor will fail.
    If set, and compilation mode is set to 'dbg', define GLIBCXX_DEBUG, 
    GLIBCXX_DEBUG_PEDANTIC and GLIBCPP_CONCEPT_CHECKS.
    Whether to perform symbol and dead-code stripping on linked binaries. 
    Binaries will be stripped if both this flag and --compilation_mode=opt are 
    specified.
    If set, .d files emitted by clang will be used to prune the set of inputs 
    passed into objc compiles.
    When enabled, and with experimental_one_version_enforcement set to a non-
    NONE value, enforce one version on java_test targets. This flag can be 
    disabled to improve incremental test performance at the expense of missing 
    potential one version violations.
    Only shows warnings and action outputs for rules with a name matching the 
    provided regular expression.
    A list of comma-separated output group names, each of which optionally 
    prefixed by a + or a -. A group prefixed by + is added to the default set 
    of output groups, while a group prefixed by - is removed from the default 
    set. If at least one group is not prefixed, the default set of output 
    groups is omitted. For example, --output_groups=+foo,+bar builds the union 
    of the default set, foo, and bar, while --output_groups=foo,bar overrides 
    the default set such that only foo and bar are built.
Override a module with a local path in the form of <module name>=<path>. If the given path is an absolute path, it will be used as it is. If the given path is a relative path, it is relative to the current working directory. If the given path starts with '%workspace%', it is relative to the workspace root, which is the output of `bazel info workspace`. If the given path is empty, then remove any previous overrides.
Override a repository with a local path in the form of <repository name>=<path>. If the given path is an absolute path, it will be used as it is. If the given path is a relative path, it is relative to the current working directory. If the given path starts with '%workspace%', it is relative to the workspace root, which is the output of `bazel info workspace`. If the given path is empty, then remove any previous overrides.
A colon-separated list of where to look for packages. Elements beginning with '%workspace%' are relative to the enclosing workspace. If omitted or empty, the default is the output of 'bazel info default-package-path'.
--per_file_copt=<comma-separated list of regex expressions with prefix '-' specifying excluded paths followed by an @ and a comma separated list of options>
    Additional options to selectively pass to gcc when compiling certain files. 
    This option can be passed multiple times. Syntax: regex_filter@option_1,
    option_2,...,option_n, where regex_filter stands for a list of include and 
    exclude regular expression patterns (also see --instrumentation_filter) 
    and option_1 to option_n stand for arbitrary command line options. If an 
    option contains a comma it has to be quoted with a backslash. Options can 
    contain @; only the first @ is used to split the string. Example: 
    --per_file_copt=//foo/.*\.cc,-//foo/bar\.cc@-O0 adds the -O0 command line 
    option to the gcc command line of all cc files in //foo/ except bar.cc.
--per_file_ltobackendopt=<comma-separated list of regex expressions with prefix '-' specifying excluded paths followed by an @ and a comma separated list of options>
    Additional options to selectively pass to the LTO backend (under 
    --features=thin_lto) when compiling certain backend objects. This option 
    can be passed multiple times. Syntax: regex_filter@option_1,option_2,...,
    option_n, where regex_filter stands for a list of include and exclude 
    regular expression patterns and option_1 to option_n stand for arbitrary 
    command line options. If an option contains a comma it has to be quoted 
    with a backslash. Options can contain @; only the first @ is used to split 
    the string. Example: --per_file_ltobackendopt=//foo/.*\.o,-//foo/bar\.o@-O0 
    adds the -O0 command line option to the LTO backend command line of all o 
    files in //foo/ except bar.o.
    Enable persistent Android dex and desugar actions by using workers.
      Expands to: --internal_persistent_android_dex_desugar --
      strategy=Desugar=worker --strategy=DexBuilder=worker 
    Enable persistent Android resource processor by using workers.
      Expands to: --internal_persistent_busybox_tools --
      strategy=AaptPackage=worker --strategy=AndroidResourceParser=worker --
      strategy=AndroidResourceValidator=worker --
      strategy=AndroidResourceCompiler=worker --strategy=RClassGenerator=worker 
      --strategy=AndroidResourceLink=worker --strategy=AndroidAapt2=worker --
      strategy=AndroidAssetMerger=worker --
      strategy=AndroidResourceMerger=worker --
      strategy=AndroidCompiledResourceMerger=worker --
      strategy=ManifestMerger=worker --strategy=AndroidManifestMerger=worker --
      strategy=Aapt2Optimize=worker --strategy=AARGenerator=worker --
      strategy=ProcessDatabinding=worker --
      strategy=GenerateDataBindingBaseClasses=worker 
    Enable persistent multiplexed Android dex and desugar actions by using 
    workers.
      Expands to: --persistent_android_dex_desugar --
      internal_persistent_multiplex_android_dex_desugar 
    Enable persistent multiplexed Android resource processor by using workers.
      Expands to: --persistent_android_resource_processor --
      modify_execution_info=AaptPackage=+supports-multiplex-workers --
      modify_execution_info=AndroidResourceParser=+supports-multiplex-workers --
      modify_execution_info=AndroidResourceValidator=+supports-multiplex-
      workers --modify_execution_info=AndroidResourceCompiler=+supports-
      multiplex-workers --modify_execution_info=RClassGenerator=+supports-
      multiplex-workers --modify_execution_info=AndroidResourceLink=+supports-
      multiplex-workers --modify_execution_info=AndroidAapt2=+supports-
      multiplex-workers --modify_execution_info=AndroidAssetMerger=+supports-
      multiplex-workers --modify_execution_info=AndroidResourceMerger=+supports-
      multiplex-workers --
      modify_execution_info=AndroidCompiledResourceMerger=+supports-multiplex-
      workers --modify_execution_info=ManifestMerger=+supports-multiplex-
      workers --modify_execution_info=AndroidManifestMerger=+supports-multiplex-
      workers --modify_execution_info=Aapt2Optimize=+supports-multiplex-workers 
      --modify_execution_info=AARGenerator=+supports-multiplex-workers 
    Enable persistent and multiplexed Android tools (dexing, desugaring, 
    resource processing).
      Expands to: --internal_persistent_multiplex_busybox_tools --
      persistent_multiplex_android_resource_processor --
      persistent_multiplex_android_dex_desugar 
    The location of a mapping file that describes which platform to use if none 
    is set or which flags to set when a platform already exists. Must be 
    relative to the main workspace root. Defaults to 'platform_mappings' (a 
    file directly under the workspace root).
    The labels of the platform rules describing the target platforms for the 
    current command.
Lists which mnemonics to filter print_action data by, no filtering takes place when left empty.
    When building a target //a:a, process headers in all targets that //a:a 
    depends on (if header processing is enabled for the toolchain).
    If set, profile Bazel and write data to the specified file. Use bazel 
    analyze-profile to analyze the profile.
    Number of profiles to retain in the output base. If there are more than 
    this number of profiles in the output base, the oldest are deleted until 
    the total is under the limit.
Show the command progress in the terminal title. Useful to see what bazel is doing when you have multiple terminal tabs.
    The number of seconds to wait between reports on still running jobs. The 
    default value 0 means the first report will be printed after 10 seconds, 
    then 30 seconds and after that progress is reported once every minute. When 
    --curses is enabled, progress is reported every second.
Specifies which version of ProGuard to use for code removal when building a Java binary.
    Use Propeller profile information to optimize the build target. A 
    Propeller profile must consist of at least one of two files: a cc profile 
    and an ld profile. This flag accepts a build label which must refer to the 
    Propeller profile input files. For example, the BUILD file that defines 
    the label, in a/b/BUILD:
    
      propeller_optimize(
          name = "propeller_profile",
          cc_profile = "propeller_cc_profile.txt",
          ld_profile = "propeller_ld_profile.txt",
      )
    
    An exports_files directive may have to be added to the corresponding 
    package to make these files visible to Bazel. The option must be used as: 
    --propeller_optimize=//a/b:propeller_profile
    Absolute path name of cc_profile file for Propeller Optimized builds.
    Absolute path name of ld_profile file for Propeller Optimized builds.
    The profile to pass to the proto compiler as profile_path. If unset, but 
    --proto_profile is true (the default), infers the path from --fdo_optimize.
    Label of proto_lang_toolchain() which describes how to compile C++ protos
    Label of proto_lang_toolchain() which describes how to compile j2objc protos
    Label of proto_lang_toolchain() which describes how to compile Java protos
    Label of proto_lang_toolchain() which describes how to compile JavaLite 
    protos
    An allowlist (package_group target) to use when enforcing --
    incompatible_python_disallow_native_rules.
    The absolute path of the Python interpreter invoked to run Python targets 
    on the target platform. Deprecated; disabled by --
    incompatible_use_python_toolchains.
    The label of a py_runtime representing the Python interpreter invoked to 
    run Python targets on the target platform. Deprecated; disabled by --
    incompatible_use_python_toolchains.
    The Python major version mode, either `PY2` or `PY3`. Note that this is 
    overridden by `py_binary` and `py_test` targets (even if they don't 
    explicitly specify a version) so there is usually not much reason to supply 
    this flag.
    By default, the Bazel profiler records only aggregated data for fast but 
    numerous events (such as statting files). If this option is enabled, the 
    profiler records each individual event, resulting in more precise 
    profiling data but a large performance hit. This option only has an effect 
    if --profile is used as well.
    If true and supported, instrumentation output is redirected to be written 
    locally on a different machine than the one Bazel is running on.
    Specifies the registries to use to locate Bazel module dependencies. The 
    order is important: modules will be looked up in earlier registries first, 
    and only fall back to later registries when they're missing from the 
    earlier ones.
If set to 'all', all local outputs referenced by BEP are uploaded to remote cache. If set to 'minimal', local outputs referenced by BEP are not uploaded to the remote cache, except for files that are important to the consumers of BEP (e.g. test logs and timing profile). The bytestream:// scheme is always used for the URI of files even if they are missing from the remote cache. Defaults to 'minimal'.
The hostname and instance name to be used in bytestream:// URIs that are written into build event streams. This option can be set when builds are performed using a proxy, which causes the values of --remote_executor and --remote_instance_name to no longer correspond to the canonical name of the remote execution service. When not set, it will default to "${hostname}/${instance_name}".
A URI of a caching endpoint. The supported schemas are http, https, grpc, grpcs (grpc with TLS enabled) and unix (local UNIX sockets). If no schema is provided Bazel will default to grpcs. Specify grpc://, http:// or unix: schema to disable TLS. See https://bazel.build/remote/caching
If true, uploading of action results to a disk or remote cache will happen in the background instead of blocking the completion of an action. Some actions are incompatible with background uploads, and may still block even when this flag is set.
If enabled, compress/decompress cache blobs with zstd when their size is at least --experimental_remote_cache_compression_threshold.
Specify a header that will be included in cache requests: --remote_cache_header=Name=Value. Multiple headers can be passed by specifying the flag multiple times. Multiple values for the same name will be converted to a comma-separated list.
    Set the default exec properties to be used as the remote execution platform 
    if an execution platform does not already set exec_properties.
Set the default platform properties to be set for the remote execution API, if the execution platform does not already set remote_execution_properties. This value will also be used if the host platform is selected as the execution platform for remote execution.
    Downloads all remote outputs to the local machine. This flag is an alias 
    for --remote_download_outputs=all.
      Expands to: --remote_download_outputs=all 
    Does not download any remote build outputs to the local machine. This flag 
    is an alias for --remote_download_outputs=minimal.
      Expands to: --remote_download_outputs=minimal 
    If set to 'minimal', doesn't download any remote build outputs to the 
    local machine, except the ones required by local actions. If set to 
    'toplevel', behaves like 'minimal' except that it also downloads outputs 
    of top level targets to the local machine. Both options can significantly 
    reduce build times if network bandwidth is a bottleneck.
    Force remote build outputs whose path matches this pattern to be 
    downloaded, irrespective of --remote_download_outputs. Multiple patterns 
    may be specified by repeating this flag.
    Instead of downloading remote build outputs to the local machine, create 
    symbolic links. The target of the symbolic links can be specified in the 
    form of a template string. This template string may contain {hash} and 
    {size_bytes} that expand to the hash of the object and the size in bytes, 
    respectively. These symbolic links may, for example, point to a FUSE file 
    system that loads objects from the CAS on demand.
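    To make the template concrete, here is a hedged sketch of an invocation; 
    the /cas mount point and the //app:server target are purely hypothetical 
    and assume a FUSE filesystem that resolves CAS objects by hash:

    ```shell
    # Hypothetical setup: a FUSE filesystem mounted at /cas serves CAS
    # objects addressed as <hash>-<size_bytes>. Outputs then appear as
    # symlinks into that mount instead of being downloaded.
    bazel build //app:server \
      --remote_download_symlink_template="/cas/{hash}-{size_bytes}"
    ```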
    Only downloads remote outputs of top level targets to the local machine. 
    This flag is an alias for --remote_download_outputs=toplevel.
      Expands to: --remote_download_outputs=toplevel 
Specify a header that will be included in remote downloader requests: --remote_downloader_header=Name=Value. Multiple headers can be passed by specifying the flag multiple times. Multiple values for the same name will be converted to a comma-separated list.
Specify a header that will be included in execution requests: --remote_exec_header=Name=Value. Multiple headers can be passed by specifying the flag multiple times. Multiple values for the same name will be converted to a comma-separated list.
The relative priority of actions to be executed remotely. The semantics of the particular priority values are server-dependent.
HOST or HOST:PORT of a remote execution endpoint. The supported schemas are grpc, grpcs (grpc with TLS enabled) and unix (local UNIX sockets). If no schema is provided Bazel will default to grpcs. Specify grpc:// or unix: schema to disable TLS.
If specified, a path to a file to log gRPC call related details. This log consists of a sequence of serialized com.google.devtools.build.lib.remote.logging.RemoteExecutionLog.LogEntry protobufs with each message prefixed by a varint denoting the size of the following serialized protobuf message, as performed by the method LogEntry.writeDelimitedTo(OutputStream).
Specify a header that will be included in requests: --remote_header=Name=Value. Multiple headers can be passed by specifying the flag multiple times. Multiple values for the same name will be converted to a comma-separated list.
Whether to fall back to standalone local execution strategy if remote execution fails.
Deprecated. See https://github.com/bazelbuild/bazel/issues/7480 for details.
    Limit the max number of concurrent connections to remote cache/executor. By 
    default the value is 100. Setting this to 0 means no limitation.
    For HTTP remote cache, one TCP connection could handle one request at one 
    time, so Bazel could make up to --remote_max_connections concurrent 
    requests.
    For gRPC remote cache/executor, one gRPC channel could usually handle 100+ 
    concurrent requests, so Bazel could make around `--remote_max_connections * 
    100` concurrent requests.
    Choose when to print remote execution messages. Valid values are `failure`, 
    to print only on failures, `success` to print only on successes and `all` 
    to print always.
Connect to the remote cache through a proxy. Currently this flag can only be used to configure a Unix domain socket (unix:/path/to/socket).
The relative priority of remote actions to be stored in remote cache. The semantics of the particular priority values are server-dependent.
The maximum number of attempts to retry a transient error. If set to 0, retries are disabled.
The maximum backoff delay between remote retry attempts. Following units can be used: Days (d), hours (h), minutes (m), seconds (s), and milliseconds (ms). If the unit is omitted, the value is interpreted as seconds.
The maximum amount of time to wait for remote execution and cache calls. For the REST cache, this is both the connect and the read timeout. Following units can be used: Days (d), hours (h), minutes (m), seconds (s), and milliseconds (ms). If the unit is omitted, the value is interpreted as seconds.
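A hedged illustration of the duration syntax (the target is made up; the flag names --remote_timeout, --remote_retries, and --remote_retry_max_delay are the ones these descriptions appear to document):

```shell
# Wait up to 10 minutes per remote call, retry transient errors up to
# 3 times, and cap the retry backoff at 30 seconds. A bare number
# (e.g. --remote_timeout=600) would be read as seconds.
bazel build //app:server \
  --remote_timeout=10m \
  --remote_retries=3 \
  --remote_retry_max_delay=30s
```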
Whether to upload locally executed action results to the remote cache if the remote cache supports it and the user is authorized to do so.
If set to true, Bazel will compute the hash sum of all remote downloads and discard the remotely cached values if they don't match the expected value.
    Specifies the location of the repo contents cache, which contains fetched 
    repo directories shareable across workspaces. An empty string as argument 
    requests the repo contents cache to be disabled.
    
    Specifies the amount of time the server must remain idle before garbage 
    collection of the repo contents cache happens.
    
    Specifies the amount of time an entry in the repo contents cache can stay 
    unused before it's garbage collected. If set to zero, garbage collection is 
    disabled.
    
    Specifies additional environment variables to be available only for 
    repository rules. Note that repository rules see the full environment 
    anyway, but in this way configuration information can be passed to 
    repositories through options without invalidating the action graph.
    
    A list of additional repositories (beyond the hardcoded ones Bazel knows 
    about) where autoloads are not to be added. This should typically contain 
    repositories that are transitively depended on by a repository that may be 
    loaded automatically (and which can therefore potentially create a cycle).
    Specifies the cache location of the downloaded values obtained during the 
    fetching of external repositories. An empty string as argument requests 
    the cache to be disabled; otherwise the default of 
    '<--output_user_root>/cache/repos/v1' is used.
    If set, downloading using ctx.download{,_and_extract} is not allowed during 
    repository fetching. Note that network access is not completely disabled; 
    ctx.execute could still run an arbitrary executable that accesses the 
    Internet.
    If set to true, directories used by sandboxed non-worker execution may be 
    reused to avoid unnecessary setup costs.
    Prefix to insert before the executables for the 'test' and 'run' commands. 
    If the value is 'foo -bar', and the execution command line is 'test_binary 
    -baz', then the final command line is 'foo -bar test_binary -baz'. This 
    can also be a label to an executable target. Some examples are: 
    'valgrind', 'strace', 'strace -c', 'valgrind --quiet --num-callers=20', 
    '//package:target', '//package:target --options'.
    Whether to run validation actions as part of the build. See https://bazel.
    build/extending/rules#validation_actions
Specifies number of times to run each test. If any of those attempts fail for any reason, the whole test is considered failed. Normally the value specified is just an integer. Example: --runs_per_test=3 will run all tests 3 times. Alternate syntax: regex_filter@runs_per_test, where runs_per_test stands for an integer value and regex_filter stands for a list of include and exclude regular expression patterns (also see --instrumentation_filter). Example: --runs_per_test=//foo/.*,-//foo/bar/.*@3 runs all tests in //foo/ except those under foo/bar three times. This option can be passed multiple times. The most recently passed argument that matches takes precedence. If nothing matches, the test is only run once.
If true, any shard in which at least one run/attempt passes and at least one run/attempt fails gets a FLAKY status.
    Add additional path pair to mount in sandbox.
    Lets the sandbox create its sandbox directories underneath this path. 
    Specify a path on tmpfs (like /run/shm) to possibly improve performance a 
    lot when your build / tests have many input files. Note: You need enough 
    RAM and free space on the tmpfs to hold output and intermediate files 
    generated by running actions.
    Enables debugging features for the sandboxing feature. This includes two 
    things: first, the sandbox root contents are left untouched after a build; 
    and second, extra debugging information is printed on execution. This can 
    help developers of Bazel or Starlark rules with debugging failures due to 
    missing input files, etc.
    Allow network access by default for actions; this may not work with all 
    sandboxing implementations.
    Explicitly enable the creation of pseudoterminals for sandboxed actions. 
    Some linux distributions require setting the group id of the process to 
    'tty' inside the sandbox in order for pseudoterminals to function. If this 
    is causing issues, this flag can be disabled to enable other groups to be 
    used.
    Change the current hostname to 'localhost' for sandboxed actions.
    Change the current username to 'nobody' for sandboxed actions.
    For sandboxed actions, mount an empty, writable directory at this absolute 
    path (if supported by the sandboxing implementation, ignored otherwise).
    For sandboxed actions, make an existing directory writable in the sandbox 
    (if supported by the sandboxing implementation, ignored otherwise).
    If set, temporary outputs from gcc will be saved. These include .s files 
    (assembler code), .i files (preprocessed C) and .ii files (preprocessed 
    C++).
    Dump a profile of serialized frontier bytes. Specifies the output path.
    If true, native libraries that contain identical functionality will be 
    shared among different targets
    Absolute path to the shell executable for Bazel to use. If this is unset, 
    but the BAZEL_SH environment variable is set on the first Bazel invocation 
    (that starts up a Bazel server), Bazel uses that. If neither is set, Bazel 
    uses a hard-coded default path depending on the operating system it runs on 
    (Windows: c:/msys64/usr/bin/bash.exe, FreeBSD: /usr/local/bin/bash, all 
    others: /bin/bash). Note that using a shell that is not compatible with 
    bash may lead to build failures or runtime failures of the generated 
    binaries.
If enabled, causes Bazel to print "Loading package:" messages.
Minimum number of seconds between progress messages in the output.
    Show the results of the build. For each target, state whether or not it 
    was brought up-to-date, and if so, a list of output files that were built. 
    The printed files are convenient strings for copy+pasting to the shell, to 
    execute them.
    This option requires an integer argument, which is the threshold number of 
    targets above which result information is not printed. Thus zero 
    suppresses the message entirely and MAX_INT causes the result to always be 
    printed. The default is one.
    If nothing was built for a target its results may be omitted to keep the 
    output under the threshold.
    Skip incompatible targets that are explicitly listed on the command line. 
    By default, building such targets results in an error but they are silently 
    skipped when this option is enabled. See: https://bazel.
    build/extending/platforms#skipping-incompatible-targets
    Flag for advanced configuration of Bazel's internal Skyframe engine. If 
    Bazel detects its retained heap percentage usage exceeds the threshold set 
    by --skyframe_high_water_mark_threshold, when a full GC event occurs, it 
    will drop unnecessary temporary Skyframe state, up to this many times per 
    invocation. Defaults to 10. Zero means that full GC events will never 
    trigger drops. If the limit is reached, Skyframe state will no longer be 
    dropped when a full GC event occurs and that retained heap percentage 
    threshold is exceeded.
    Flag for advanced configuration of Bazel's internal Skyframe engine. If 
    Bazel detects its retained heap percentage usage exceeds the threshold set 
    by --skyframe_high_water_mark_threshold, when a minor GC event occurs, it 
    will drop unnecessary temporary Skyframe state, up to this many times per 
    invocation. Defaults to 10. Zero means that minor GC events will never 
    trigger drops. If the limit is reached, Skyframe state will no longer be 
    dropped when a minor GC event occurs and that retained heap percentage 
    threshold is exceeded.
    Flag for advanced configuration of Bazel's internal Skyframe engine. If 
    Bazel detects its retained heap percentage usage is at least this 
    threshold, it will drop unnecessary temporary Skyframe state. Tweaking this 
    may let you mitigate wall time impact of GC thrashing, when the GC 
    thrashing is (i) caused by the memory usage of this temporary state and 
    (ii) more costly than reconstituting the state when it is needed.
    Slims down the size of the JSON profile by merging events if the profile 
    gets too large.
    Specify how spawn actions are executed by default. Accepts a comma-
    separated list of strategies from highest to lowest priority. For each 
    action Bazel picks the strategy with the highest priority that can execute 
    the action. The default value is "remote,worker,sandboxed,local". See 
    https://blog.bazel.build/2019/06/19/list-strategy.html for details.
    Stamp binaries with the date, username, hostname, workspace information, 
    etc.
    Writes into the specified file a pprof profile of CPU usage by all Starlark 
    threads.
    Specify how to distribute compilation of other spawn actions. Accepts a 
    comma-separated list of strategies from highest to lowest priority. For 
    each action Bazel picks the strategy with the highest priority that can 
    execute the action. The default value is "remote,worker,sandboxed,local". 
    This flag overrides the values set by --spawn_strategy (and 
    --genrule_strategy if used with mnemonic Genrule). See 
    https://blog.bazel.build/2019/06/19/list-strategy.html for details.
    Override which spawn strategy should be used to execute spawn actions that 
    have descriptions matching a certain regex_filter. See --per_file_copt for 
    details on regex_filter matching. The last regex_filter that matches the 
    description is used. This option overrides other flags for specifying 
    strategy. Example: --strategy_regexp=//foo.*\.cc,-//foo/bar=local means to 
    run actions using local strategy if their descriptions match //foo.*.cc 
    but not //foo/bar. Example: --strategy_regexp='Compiling.*/bar=local' 
    --strategy_regexp=Compiling=sandboxed will run 'Compiling //foo/bar/baz' 
    with the 'local' strategy, but reversing the order would run it with 
    'sandboxed'.
    If this option is enabled, filesets crossing package boundaries are 
    reported as errors.
    Unless OFF, checks that a proto_library target explicitly declares all 
    directly used targets as dependencies.
    Unless OFF, checks that a proto_library target explicitly declares all 
    targets used in 'import public' as exported.
    If true, headers found through system include paths (-isystem) are also 
    required to be declared.
    Specifies whether to strip binaries and shared libraries (using 
    "-Wl,--strip-debug"). The default value of 'sometimes' means strip iff 
    --compilation_mode=fastbuild.
    Display the subcommands executed during a build. Related flags: --
    execution_log_json_file, --execution_log_binary_file (for logging 
    subcommands to a file in a tool-friendly format).
    The prefix that is prepended to any of the convenience symlinks that are 
    created after a build. If omitted, the default value is the name of the 
    build tool followed by a hyphen. If '/' is passed, then no symlinks are 
    created and no warning is emitted. Warning: the special functionality for 
    '/' will be deprecated soon; use --experimental_convenience_symlinks=ignore 
    instead.
    Declares this build's target environment. Must be a label reference to an 
    "environment" rule. If specified, all top-level targets must be compatible 
    with this environment.
    If set, build will read patterns from the file named here, rather than on 
    the command line. It is an error to specify a file here as well as command-
    line patterns.
Specifies additional options and arguments that should be passed to the test executable. Can be used multiple times to specify several arguments. If multiple tests are executed, each of them will receive identical arguments. Used only by the 'bazel test' command.
    Specifies additional environment variables to be injected into the test 
    runner environment. Variables can be either specified by name, in which 
    case its value will be read from the Bazel client environment, or by the 
    name=value pair. This option can be used multiple times to specify several 
    variables. Used only by the 'bazel test' command.
Specifies a filter to forward to the test framework. Used to limit the tests run. Note that this does not affect which targets are built.
    When disabled, any non-passing test will cause the entire build to stop. By 
    default all tests are run, even if some do not pass.
Specifies a comma-separated list of test languages. Each language can be optionally preceded with '-' to specify excluded languages. Only those test targets will be found that are written in the specified languages. The name used for each language should be the same as the language prefix in the *_test rule, e.g. one of 'cc', 'java', 'py', etc. This option affects --build_tests_only behavior and the test command.
    Specifies desired output mode. Valid values are 'summary' to output only 
    test status summary, 'errors' to also print test logs for failed tests, 
    'all' to print logs for all tests and 'streamed' to output logs for all 
    tests in real time (this will force tests to be executed locally one at a 
    time regardless of --test_strategy value).
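For example, to watch a long-running test's log as it executes (the target label is a placeholder):

```shell
# 'streamed' forces local, serial execution so output can be shown live.
bazel test //foo:bar_test --test_output=streamed
```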
Forwards the fail-fast option to the test runner. The test runner should stop execution upon the first failure.
--test_sharding_strategy=<explicit, disabled or forced=k where k is the number of shards to enforce>
Specify strategy for test sharding: 'explicit' to only use sharding if the 'shard_count' BUILD attribute is present. 'disabled' to never use test sharding. 'forced=k' to enforce 'k' shards for testing regardless of the 'shard_count' BUILD attribute.
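For example, to force a fixed shard count regardless of what the BUILD file declares (the target label is a placeholder):

```shell
# Run the test as 4 shards even if the rule has no shard_count attribute.
bazel test //foo:bar_test --test_sharding_strategy=forced=4
```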
Specifies a comma-separated list of test sizes. Each size can be optionally preceded with '-' to specify excluded sizes. Only those test targets will be found that contain at least one included size and do not contain any excluded sizes. This option affects --build_tests_only behavior and the test command.
    Specifies the desired format of the test summary. Valid values are 'short' 
    to print information only about tests executed, 'terse' to print 
    information only about unsuccessful tests that were run, 'detailed' to 
    print detailed information about failed test cases, 'testcase' to print a 
    summary at test case resolution without detailed information about failed 
    test cases, and 'none' to omit the summary.
Specifies a comma-separated list of test tags. Each tag can be optionally preceded with '-' to specify excluded tags. Only those test targets will be found that contain at least one included tag and do not contain any excluded tags. This option affects --build_tests_only behavior and the test command.
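For example, combining an included and an excluded tag (the tag names are placeholders):

```shell
# Select tests tagged "smoke", but skip any that are also tagged "flaky".
bazel test //... --test_tag_filters=smoke,-flaky --build_tests_only
```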
Overrides the default test timeout values (in seconds). If a single positive integer value is specified, it will override all categories. If 4 comma-separated integers are specified, they will override the timeouts for short, moderate, long and eternal (in that order). In either form, a value of -1 tells Bazel to use its default timeout for that category.
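For example, overriding three categories while keeping the default for the fourth:

```shell
# short=10s, moderate=60s, long=300s; -1 keeps the default eternal timeout.
bazel test //... --test_timeout=10,60,300,-1
```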
Specifies a comma-separated list of test timeouts. Each timeout can be optionally preceded with '-' to specify excluded timeouts. Only those test targets will be found that contain at least one included timeout and do not contain any excluded timeouts. This option affects --build_tests_only behavior and the test command.
Specify a path to a TLS certificate that is trusted to sign server certificates.
Specify the TLS client certificate to use; you also need to provide a client key to enable client authentication.
Specify the TLS client key to use; you also need to provide a client certificate to enable client authentication.
The Java language version used to execute the tools that are needed during a build.
--toolchain_resolution_debug=<comma-separated list of regex expressions with prefix '-' specifying excluded paths>
    Print debug information during toolchain resolution. The flag takes a 
    regex, which is checked against toolchain types and specific targets to see 
    which to debug. Multiple regexes may be separated by commas, and then each 
    regex is checked separately. Note: The output of this flag is very complex 
    and will likely only be useful to experts in toolchain resolution.
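As an illustrative sketch, a regex can match a single toolchain type label so that only its resolution is traced (the label below assumes the standard C++ toolchain type from @bazel_tools):

```shell
# Trace resolution only for the C++ toolchain type.
bazel build //foo:bar \
  --toolchain_resolution_debug=@bazel_tools//tools/cpp:toolchain_type
```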
    If false, Blaze will not persist data that allows for invalidation and re-
    evaluation on incremental builds in order to save memory on this build. 
    Subsequent builds will not have any incrementality with respect to this 
    one. Usually you will want to specify --batch when setting this to false.
    When enabled, test-related options will be cleared below the top level of 
    the build. When this flag is active, tests cannot be built as dependencies 
    of non-test rules, but changes to test-related options will not cause non-
    test rules to be re-analyzed.
    Comma-separated list of architectures for which to build Apple tvOS 
    binaries.
    Minimum compatible tvOS version for target simulators and devices. If 
    unspecified, uses 'tvos_sdk_version'.
    Specifies the version of the tvOS SDK to use to build tvOS applications. If 
    unspecified, uses the default tvOS SDK version from 'xcode_version'.
    Number of concurrent actions shown in the detailed progress bar; each 
    action is shown on a separate line. The progress bar always shows at least 
    one action; all numbers less than 1 are mapped to 1.
    Specifies which events to show in the UI. It is possible to add or remove 
    events to the default ones using leading +/-, or override the default set 
    completely with direct assignment. The set of supported event kinds include 
    INFO, DEBUG, ERROR and more.
If enabled, this option causes Java compilation to use interface jars. This will result in faster incremental compilation, but error messages can be different.
    If true, then Bazel will use the target platform for running tests rather 
    than the test exec group.
    Specifies the directory that should hold the external repositories in 
    vendor mode, whether for the purpose of fetching them into it or using them 
    while building. The path can be specified as either an absolute path or a 
    path relative to the workspace directory.
    Increases the verbosity of the explanations issued if --explain is enabled. 
    Has no effect if --explain is not enabled.
    Comma-separated list of architectures for which to build Apple visionOS 
    binaries.
On Linux/macOS: If true, bazel tries to use the operating system's file watch service for local changes instead of scanning every file for a change. On Windows: this flag is currently a no-op but can be enabled in conjunction with --experimental_windows_watchfs. On any OS: the behavior is undefined if your workspace is on a network file system and files are edited on a remote machine.
    Comma-separated list of architectures for which to build Apple watchOS 
    binaries.
    Minimum compatible watchOS version for target simulators and devices. If 
    unspecified, uses 'watchos_sdk_version'.
    Specifies the version of the watchOS SDK to use to build watchOS 
    applications. If unspecified, uses the default watchOS SDK version from 
    'xcode_version'.
    Extra command-flags that will be passed to worker processes in addition to 
    --persistent_worker, keyed by mnemonic (e.g. 
    --worker_extra_flag=Javac=--debug).
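For example, forwarding an extra flag to all workers with the Javac mnemonic (the target label is a placeholder):

```shell
# The flag after "Javac=" is handed to the worker process, not to Bazel.
bazel build //foo:bar --worker_extra_flag=Javac=--debug
```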
--worker_max_instances=<[name=]value, where value is an integer, or a keyword ("auto", "HOST_CPUS", "HOST_RAM"), optionally followed by an operation ([-|*]<float>) eg. "auto", "HOST_CPUS*.5">
    How many instances of each kind of persistent worker may be launched if you 
    use the 'worker' strategy. May be specified as [name=value] to give a 
    different value per mnemonic. The limit is based on worker keys, which are 
    differentiated based on mnemonic, but also on startup flags and 
    environment, so there can in some cases be more workers per mnemonic than 
    this flag specifies. Takes an integer, or a keyword ("auto", "HOST_CPUS", 
    "HOST_RAM"), optionally followed by an operation ([-|*]<float>) eg. "auto", 
    "HOST_CPUS*.5". 'auto' calculates a reasonable default based on machine 
    capacity. "=value" sets a default for unspecified mnemonics.
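For example, setting a general default plus a per-mnemonic override (quoting protects the `*` from shell globbing):

```shell
# At most 2 workers per mnemonic by default, but up to half the host
# CPUs for Javac workers.
bazel build //... --worker_max_instances=2 \
  --worker_max_instances='Javac=HOST_CPUS*.5'
```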
--worker_max_multiplex_instances=<[name=]value, where value is an integer, or a keyword ("auto", "HOST_CPUS", "HOST_RAM"), optionally followed by an operation ([-|*]<float>) eg. "auto", "HOST_CPUS*.5">
    How many WorkRequests a multiplex worker process may receive in parallel if 
    you use the 'worker' strategy with --worker_multiplex. May be specified as 
    [name=value] to give a different value per mnemonic. The limit is based on 
    worker keys, which are differentiated based on mnemonic, but also on 
    startup flags and environment, so there can in some cases be more workers 
    per mnemonic than this flag specifies. Takes an integer, or a keyword 
    ("auto", "HOST_CPUS", "HOST_RAM"), optionally followed by an operation ([-
    |*]<float>) eg. "auto", "HOST_CPUS*.5". 'auto' calculates a reasonable 
    default based on machine capacity. "=value" sets a default for unspecified 
    mnemonics.
    If enabled, singleplex workers will run in a sandboxed environment. 
    Singleplex workers are always sandboxed when running under the dynamic 
    execution strategy, irrespective of this flag.
If enabled, prints verbose messages when workers are started, shut down, etc.
A command invoked at the beginning of the build to provide status information about the workspace in the form of key/value pairs.  See the User's Manual for the full specification. Also see tools/buildstamp/get_workspace_status for an example.
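As a minimal sketch, the command might be a small script (the path tools/bazel_status.sh is hypothetical) that prints one key/value pair per line; keys with the STABLE_ prefix invalidate stamped actions when their values change:

```shell
#!/bin/bash
# Hypothetical tools/bazel_status.sh: emit workspace status key/value pairs.
# Falls back to "unknown" when the workspace is not a git checkout.
echo "STABLE_GIT_COMMIT $(git rev-parse HEAD 2>/dev/null || echo unknown)"
echo "BUILD_SCM_STATUS clean"
```

It would then be wired in with something like `bazel build --workspace_status_command=tools/bazel_status.sh --stamp //foo:bar`.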
    Use XbinaryFDO profile information to optimize compilation. Specifies the 
    name of the default cross-binary profile. When this option is used together 
    with --fdo_instrument/--fdo_optimize/--fdo_profile, those options will 
    always prevail, as if xbinary_fdo were never specified.
    If specified, uses Xcode of the given version for relevant build actions. 
    If unspecified, uses the executor default version of Xcode.
    The label of the xcode_config rule to be used for selecting the Xcode 
    version in the build configuration.