Specifies the module versions in the form of `<module1>@<version1>,
<module2>@<version2>` that will be allowed in the resolved dependency graph
even if they are declared yanked in the registry where they come from (if
they are not coming from a NonRegistryOverride). Otherwise, yanked versions
will cause the resolution to fail. You can also define allowed yanked
versions with the `BZLMOD_ALLOW_YANKED_VERSIONS` environment variable. You
can disable this check by using the keyword 'all' (not recommended).
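For example, to allow two yanked versions (the module names and versions
here are illustrative), either of the following works:

```
# Command-line flag:
bazel build --allow_yanked_versions=rules_foo@1.2.0,rules_bar@0.9.1 //...

# Environment variable:
BZLMOD_ALLOW_YANKED_VERSIONS=rules_foo@1.2.0,rules_bar@0.9.1 bazel build //...
```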
How to resolve aspect dependencies when the output format is one of {xml,
proto, record}. 'off' means no aspect dependencies are resolved,
'conservative' (the default) means all declared aspect dependencies are
added regardless of whether they are given the rule class of direct
dependencies, 'precise' means that only those aspects are added that are
possibly active given the rule class of the direct dependencies. Note that
precise mode requires loading other packages to evaluate a single target
thus making it slower than the other modes. Also note that even precise
mode is not completely precise: the decision whether to compute an aspect
is decided in the analysis phase, which is not run during 'bazel query'.
When printing the location part of messages, attempt to use a path relative
to the workspace directory or one of the directories specified by
--package_path.
Maximum number of open files allowed during BEP artifact upload.
Specifies the build event service (BES) backend endpoint in the form
[SCHEME://]HOST[:PORT]. The default is to disable BES uploads. Supported
schemes are grpc and grpcs (grpc with TLS enabled). If no scheme is
provided, Bazel assumes grpcs.
Sets the field check_preceding_lifecycle_events_present on
PublishBuildToolEventStreamRequest which tells BES to check whether it
previously received InvocationAttemptStarted and BuildEnqueued events
matching the current tool event.
Specify a header in NAME=VALUE form that will be included in BES requests.
Multiple headers can be passed by specifying the flag multiple times.
Multiple values for the same name will be converted to a comma-separated
list.
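For example, assuming a backend that expects an API key and a team header
(both header names are hypothetical):

```
bazel build //... \
  --bes_header=x-api-key=abc123 \
  --bes_header=x-team=core \
  --bes_header=x-team=infra   # merged into "x-team: core,infra"
```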
Specifies the instance name under which the BES will persist uploaded BEP.
Defaults to null.
Specifies a list of notification keywords to be added to the default set of
keywords published to BES ("command_name=<command_name>",
"protocol_name=BEP"). Defaults to none.
Specifies whether to publish BES lifecycle events. (defaults to 'true').
Specifies how long bazel should wait for the BES/BEP upload to complete
while OOMing. This flag ensures termination when the JVM is severely GC
thrashing and cannot make progress on any user thread.
Specifies the maximal size of stdout or stderr to be buffered in BEP,
before it is reported as a progress event. Individual writes are still
reported in a single event, even if larger than the specified value, up to
--bes_outerr_chunk_size.
Specifies the maximal size of stdout or stderr to be sent to BEP in a
single message.
Connect to the Build Event Service through a proxy. Currently this flag can only be used to configure a Unix domain socket (unix:/path/to/socket).
Specifies the base URL where a user can view the information streamed to
the BES backend. Bazel will output the URL appended by the invocation id to
the terminal.
Specifies a list of notification keywords to be included directly, without
the "user_keyword=" prefix included for keywords supplied via
--bes_keywords. Intended for build service operators that set
--bes_lifecycle_events=false and include keywords when calling
PublishLifecycleEvent. Build service operators using this flag should
prevent users from overriding the flag value.
Specifies how long bazel should wait for the BES/BEP upload to complete
after the build and tests have finished. A valid timeout is a natural
number followed by a unit: Days (d), hours (h), minutes (m), seconds (s),
and milliseconds (ms). The default value is '0' which means that there is
no timeout.
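For example, to cap the post-build upload wait at 30 seconds:

```
bazel test //... --bes_timeout=30s
```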
Specifies whether the Build Event Service upload should block the build
completion or should end the invocation immediately and finish the upload
in the background. Either 'wait_for_upload_complete' (default),
'nowait_for_upload_complete', or 'fully_async'.
If non-empty, write a varint-delimited binary representation of the build
event protocol to that file. This option implies
--bes_upload_mode=wait_for_upload_complete.
Convert paths in the binary file representation of the build event protocol
to more globally valid URIs whenever possible; if disabled, the file:// URI
scheme will always be used.
--build_event_binary_file_upload_mode=<wait_for_upload_complete, nowait_for_upload_complete or fully_async>
Specifies whether the Build Event Service upload for
--build_event_binary_file should block the build completion or should end
the invocation immediately and finish the upload in the background. Either
'wait_for_upload_complete' (default), 'nowait_for_upload_complete', or
'fully_async'.
If non-empty, write a JSON serialisation of the build event protocol to
that file. This option implies --bes_upload_mode=wait_for_upload_complete.
Convert paths in the json file representation of the build event protocol
to more globally valid URIs whenever possible; if disabled, the file:// URI
scheme will always be used.
--build_event_json_file_upload_mode=<wait_for_upload_complete, nowait_for_upload_complete or fully_async>
Specifies whether the Build Event Service upload for
--build_event_json_file should block the build completion or should end the
invocation immediately and finish the upload in the background. Either
'wait_for_upload_complete' (default), 'nowait_for_upload_complete', or
'fully_async'.
The maximum number of entries for a single named_set_of_files event; values
smaller than 2 are ignored and no event splitting is performed. This is
intended for limiting the maximum event size in the build event protocol,
although it does not directly control event size. The total event size is a
function of the structure of the set as well as the file and uri lengths,
which may in turn depend on the hash function.
If non-empty, write a textual representation of the build event protocol to
that file.
Convert paths in the text file representation of the build event protocol
to more globally valid URIs whenever possible; if disabled, the file:// URI
scheme will always be used.
--build_event_text_file_upload_mode=<wait_for_upload_complete, nowait_for_upload_complete or fully_async>
Specifies whether the Build Event Service upload for
--build_event_text_file should block the build completion or should end the
invocation immediately and finish the upload in the background. Either
'wait_for_upload_complete' (default), 'nowait_for_upload_complete', or
'fully_async'.
Custom key-value string pairs to supply in a build event.
Check bazel version compatibility of Bazel modules. Valid values are
`error` to escalate it to a resolution failure, `off` to disable the check,
or `warning` to print a warning when a mismatch is detected.
If disabled, .bzl load visibility errors are demoted to warnings.
Check if the direct `bazel_dep` dependencies declared in the root module
are the same versions you get in the resolved dependency graph. Valid
values are `off` to disable the check, `warning` to print a warning when a
mismatch is detected, or `error` to escalate it to a resolution failure.
Selects additional config sections from the rc files; for every <command>, it also pulls in the options from <command>:<config> if such a section exists; if this section doesn't exist in any .rc file, Blaze fails with an error. The config sections and flag combinations they are equivalent to are located in the tools/*.blazerc config files.
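For illustration, given a .bazelrc with the following sections (the config
name and options are hypothetical), running `bazel test --config=memcheck`
pulls in both the build:memcheck and test:memcheck lines:

```
# .bazelrc
build:memcheck --strip=never --copt=-O1
test:memcheck  --test_timeout=3600
```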
If enabled, every query command emits labels as if by the Starlark
<code>str</code> function applied to a <code>Label</code> instance. This is
useful for tools that need to match the output of different query commands
and/or labels emitted by rules. If not enabled, output formatters are free
to emit apparent repository names (relative to the main repository) instead
to make the output more readable.
--credential_helper=<Path to a credential helper. It may be absolute, relative to the PATH environment variable, or %workspace%-relative. The path may optionally be prefixed by a scope followed by an '='. The scope is a domain name, optionally with a single leading '*' wildcard component. A helper applies to URIs matching its scope, with more specific scopes preferred. If a helper has no scope, it applies to every URI.>
Configures a credential helper conforming to the <a href="https://github.com/EngFlow/credential-helper-spec">Credential Helper Specification</a> to use for retrieving authorization credentials for repository fetching, remote caching and execution, and the build event service. Credentials supplied by a helper take precedence over credentials supplied by `--google_default_credentials`, `--google_credentials`, a `.netrc` file, or the auth parameter to `repository_ctx.download()` and `repository_ctx.download_and_extract()`. May be specified multiple times to set up multiple helpers. See https://blog.engflow.com/2023/10/09/configuring-bazels-credential-helper/ for instructions.
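A sketch of scoped and unscoped helpers (paths and domains are
illustrative):

```
# Applies to every URI:
bazel build //... --credential_helper=/usr/local/bin/default-helper

# Applies only to hosts matching *.example.com; preferred over the
# unscoped helper for those URIs because its scope is more specific:
bazel build //... --credential_helper=*.example.com=corp-helper
```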
The default duration for which credentials supplied by a credential helper are cached if the helper does not indicate when the credentials expire.
Configures the timeout for a credential helper. Credential helpers failing to respond within this timeout will fail the invocation.
A comma-separated list of names of packages which the build system will consider non-existent, even if they are visible somewhere on the package path. Use this option when deleting a subpackage 'x/y' of an existing package 'x'. For example, after deleting x/y/BUILD in your client, the build system may complain if it encounters a label '//x:y/z' if that is still provided by another package_path entry. Specifying --deleted_packages x/y avoids this problem.
A path to a directory where Bazel can read and write actions and action outputs. If the directory does not exist, it will be created.
Additional places to search for archives before accessing the network to
download them.
If true, enables the Bzlmod dependency management system, taking precedence
over WORKSPACE. See https://bazel.build/docs/bzlmod for more information.
If true, Bazel picks up host-OS-specific config lines from bazelrc files. For example, if the host OS is Linux and you run bazel build, Bazel picks up lines starting with build:linux. Supported OS identifiers are linux, macos, windows, freebsd, and openbsd. Enabling this flag is equivalent to using --config=linux on Linux, --config=windows on Windows, etc.
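With the flag enabled, a .bazelrc such as the following (the options shown
are illustrative) applies the matching line automatically for the host OS:

```
# .bazelrc
build:linux   --copt=-fno-omit-frame-pointer
build:macos   --macos_minimum_os=12.0
build:windows --copt=/Zc:preprocessor
```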
If true, enables the legacy WORKSPACE system for external dependencies. See
https://bazel.build/external/overview for more information.
If set to true, ctx.actions.run() and ctx.actions.run_shell() accept a
resource_set parameter for local execution. Otherwise it will default to
250 MB for memory and 1 CPU.
If enabled, adds the JSON profile path to the log.
If true, expand Filesets in the BEP when presenting output files.
If true, fully resolve relative Fileset symlinks in the BEP when presenting
output files. Requires --experimental_build_event_expand_filesets.
The maximum number of times Bazel should retry uploading a build event.
Initial, minimum delay for exponential backoff retries when BEP upload
fails. (exponent: 1.6)
Selects how to upload artifacts referenced in the build event protocol.
If enabled, adds a `visibility()` function that .bzl files may call during
top-level evaluation to set their visibility for the purpose of load()
statements.
If set to true, rule attributes and Starlark API methods needed for the
rule cc_shared_library will be available
If set to true, rule attributes and Starlark API methods needed for the
rule cc_static_library will be available
Specifies the strategy for the circuit breaker to use. Available strategies
are "failure". If an invalid value is given, the behavior is the same as if
the option were not set.
If enabled, the profiler collects the system's overall load average.
If enabled, the profiler collects the Linux PSI data.
If enabled, the profiler collects CPU and memory usage estimation for local
actions.
If enabled, the profiler collects the system's network usage.
If enabled, the profiler collects worker's aggregated resource data.
Records a Java Flight Recorder profile for the duration of the command. One of the supported profiling event types (cpu, wall, alloc or lock) must be given as an argument. The profile is written to a file named after the event type under the output base directory. The syntax and semantics of this flag might change in the future to support additional profile types or output formats; use at your own risk.
If set to true, the auto-generated //external package will not be available
anymore. Bazel will still be unable to parse the file 'external/BUILD', but
globs reaching into external/ from the unnamed package will work.
How long the server must remain idle before a garbage collection of the disk cache occurs. To specify the garbage collection policy, set --experimental_disk_cache_gc_max_size and/or --experimental_disk_cache_gc_max_age.
If set to a positive value, the disk cache will be periodically garbage collected to remove entries older than this age. If set in conjunction with --experimental_disk_cache_gc_max_size, both criteria are applied. Garbage collection occurs in the background once the server has become idle, as determined by the --experimental_disk_cache_gc_idle_delay flag.
--experimental_disk_cache_gc_max_size=<size in bytes, optionally followed by a K, M, G or T multiplier>
If set to a positive value, the disk cache will be periodically garbage collected to stay under this size. If set in conjunction with --experimental_disk_cache_gc_max_age, both criteria are applied. Garbage collection occurs in the background once the server has become idle, as determined by the --experimental_disk_cache_gc_idle_delay flag.
Specify a file to configure the remote downloader with. This file consists of lines, each of which starts with a directive (`allow`, `block` or `rewrite`) followed by either a host name (for `allow` and `block`) or two patterns, one to match against, and one to use as a substitute URL, with back-references starting from `$1`. It is possible for multiple `rewrite` directives for the same URL to be given, and in this case multiple URLs will be returned.
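A sketch of such a config file (the host names are illustrative):

```
# One directive per line.
allow mirror.internal.example
block example-blocked.com
rewrite github.com/(.+) mirror.internal.example/github/$1
```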
If set to true, enables the APIs required to support the Android Starlark
migration.
If set to true, .scl files may be used in load() statements.
aquery, cquery: whether to include aspect-generated actions in the output.
query: no-op (aspects are always followed).
If set to true, exposes a number of experimental pieces of Starlark build
API pertaining to Google legacy code.
If true, uses a Query implementation that does not make a copy of the
graph. The new implementation only supports --order_output=no, as well as
only a subset of output formatters.
Turn this off to disable checking the ctime of input files of an action before uploading it to a remote cache. There may be cases where the Linux kernel delays writing of files, which could cause false positives.
If true, enables the <code>isolate</code> parameter in the <a
href="https://bazel.build/rules/lib/globals/module#use_extension"
><code>use_extension</code></a> function.
If enabled, experimental_java_library_export_do_not_use module is available.
If set to true, enables a number of platform-related Starlark APIs useful
for debugging.
--experimental_profile_additional_tasks=<phase, action, action_check, action_lock, action_release, action_update, action_complete, bzlmod, info, create_package, remote_execution, local_execution, scanner, local_parse, upload_time, remote_process_time, remote_queue, remote_setup, fetch, local_process_time, vfs_stat, vfs_dir, vfs_readlink, vfs_md5, vfs_xattr, vfs_delete, vfs_open, vfs_read, vfs_write, vfs_glob, vfs_vmfs_stat, vfs_vmfs_dir, vfs_vmfs_read, wait, thread_name, thread_sort_index, skyframe_eval, skyfunction, critical_path, critical_path_component, handle_gc_notification, action_counts, action_cache_counts, local_cpu_usage, system_cpu_usage, cpu_usage_estimation, local_memory_usage, system_memory_usage, memory_usage_estimation, system_network_up_usage, system_network_down_usage, workers_memory_usage, system_load_average, starlark_parser, starlark_user_fn, starlark_builtin_fn, starlark_user_compiled_fn, starlark_repository_fn, action_fs_staging, remote_cache_check, remote_download, remote_network, filesystem_traversal, worker_execution, worker_setup, worker_borrow, worker_working, worker_copying_outputs, credential_helper, pressure_stall_io, pressure_stall_memory, conflict_check, dynamic_lock, repository_fetch, repository_vendor or unknown>
Specifies additional profile tasks to be included in the profile.
Includes the extra "out" attribute in action events that contains the exec
path to the action's primary output.
Includes target label in action events' JSON profile data.
By default the number of action types is limited to the 20 mnemonics with the largest number of executed actions. Setting this option will write statistics for all mnemonics.
If true, remote cache I/O will happen in the background instead of taking place as part of a spawn.
The minimum blob size required to compress/decompress with zstd. Ineffectual unless --remote_cache_compression is set.
If set to true, Bazel will extend the lease for outputs of remote actions during the build by sending `FindMissingBlobs` calls periodically to remote cache. The frequency is based on the value of `--experimental_remote_cache_ttl`.
The guaranteed minimal TTL of blobs in the remote cache after their digests
are recently referenced, e.g. by an ActionResult or FindMissingBlobs. Bazel
does several optimizations based on the blobs' TTL, e.g. it doesn't
repeatedly call GetActionResult in an incremental build. The value should
be set slightly less than the real TTL, since there is a gap between when
the server returns the digests and when Bazel receives them.
A path to a directory where the corrupted outputs will be captured to.
If set to true, discard in-memory copies of the input root's Merkle tree and associated input mappings during calls to GetActionResult() and Execute(). This reduces memory usage significantly, but does require Bazel to recompute them upon remote cache misses and retries.
A Remote Asset API endpoint URI, to be used as a remote download proxy. The supported schemas are grpc, grpcs (grpc with TLS enabled) and unix (local UNIX sockets). If no schema is provided Bazel will default to grpcs. See: https://github.com/bazelbuild/remote-apis/blob/master/build/bazel/remote/asset/v1/remote_asset.proto
Whether to fall back to the local downloader if remote downloader fails.
Whether to use keepalive for remote execution calls.
Sets the allowed failure rate, as a percentage, for a specific time window
after which it stops calling the remote cache/executor. By default the
value is 10. Setting this to 0 means no limitation.
The interval in which the failure rate of the remote requests is computed.
On a zero or negative value, the failure rate is computed over the whole
duration of the execution. The following units can be used: Days (d), hours
(h), minutes (m), seconds (s), and milliseconds (ms). If the unit is
omitted, the value is interpreted as seconds.
If set to true, Bazel will mark inputs as tool inputs for the remote executor. This can be used to implement remote persistent workers.
If set to true, Merkle tree calculations will be memoized to improve the remote cache hit checking speed. The memory foot print of the cache is controlled by --experimental_remote_merkle_tree_cache_size.
The number of Merkle trees to memoize to improve the remote cache hit checking speed. Even though the cache is automatically pruned according to Java's handling of soft references, out-of-memory errors can occur if it is set too high. If set to 0 the cache size is unlimited. The optimal value varies depending on the project's size. Defaults to 1000.
HOST or HOST:PORT of a remote output service endpoint. The supported schemas are grpc, grpcs (grpc with TLS enabled) and unix (local UNIX sockets). If no schema is provided Bazel will default to grpcs. Specify grpc:// or unix: schema to disable TLS.
The path under which the contents of output directories managed by the --experimental_remote_output_service are placed. The actual output directory used by a build will be a descendant of this path and determined by the output service.
If set to true, enforce that all actions that can run remotely are cached, or else fail the build. This is useful to troubleshoot non-determinism issues as it allows checking whether actions that should be cached are actually cached without spuriously injecting new results into the cache.
Enables remote cache key scrubbing with the supplied configuration file, which must be a protocol buffer in text format (see src/main/protobuf/remote_scrubbing.proto). This feature is intended to facilitate sharing a remote/disk cache between actions executing on different platforms but targeting the same platform. It should be used with extreme care, as improper settings may cause accidental sharing of cache entries and result in incorrect builds. Scrubbing does not affect how an action is executed, only how its remote/disk cache key is computed for the purpose of retrieving or storing an action result. Scrubbed actions are incompatible with remote execution, and will always be executed locally instead. Modifying the scrubbing configuration does not invalidate outputs present in the local filesystem or internal caches; a clean build is required to reexecute affected actions. In order to successfully use this feature, you likely want to set a custom --host_platform together with --experimental_platform_in_output_dir (to normalize output prefixes) and --incompatible_strict_action_env (to normalize environment variables).
If set to true, repository_rule gains some remote execution capabilities.
If set, the repository cache will hardlink the file in case of a cache hit,
rather than copying. This is intended to save disk space.
The maximum number of attempts to retry a download error. If set to 0,
retries are disabled.
If non-empty, write a Starlark value with the resolved information of all
Starlark repository rules that were executed.
If non-empty, read the specified resolved file instead of the WORKSPACE
file.
Enables the experimental rule extension API and subrule APIs.
Whether to include the command-line residue in run build events which could
contain the residue. By default, the residue is not included in run command
build events that could contain the residue.
Scale all timeouts in Starlark repository rules by this factor. In this
way, external repositories can be made to work on machines that are slower
than the rule author expected, without changing the source code.
If set to true, non-main repositories are planted as symlinks to the main
repository in the execution root. That is, all repositories are direct
children of the $output_base/execution_root directory. This has the side
effect of freeing up $output_base/execution_root/__main__/external for the
real top-level 'external' directory.
Stream log file uploads directly to the remote storage rather than writing
them to disk.
The maximum size of the stdout / stderr files that will be printed to the
console. -1 implies no limit.
If true, experimental Windows support for --watchfs is enabled. Otherwise --watchfs is a no-op on Windows. Make sure to also enable --watchfs.
The threading mode to use for repo fetching. If set to 'off', no worker thread is used, and the repo fetching is subject to restarts. Otherwise, uses a virtual worker thread.
Log certain Workspace Rules events into this file as delimited WorkspaceEvent protos.
Allows the command to fetch external dependencies. If set to false, the command will utilize any cached version of the dependency, and if none exists, the command will result in failure.
Limits which, if reached, cause GcThrashingDetector to crash Bazel with an
OOM. Each limit is specified as <period>:<count> where period is a duration
and count is a positive integer. If more than --gc_thrashing_threshold
percent of tenured space (old gen heap) remains occupied after <count>
consecutive full GCs within <period>, an OOM is triggered. Multiple limits
can be specified separated by commas.
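For example (the limit values below are illustrative):

```
# OOM after 2 full GCs within 1s, 3 within 20s, or 5 within 1m:
bazel build //... --gc_thrashing_limits=1s:2,20s:3,1m:5
```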
The percent of tenured space occupied (0-100) above which
GcThrashingDetector considers memory pressure events against its limits
(--gc_thrashing_limits). If set to 100, GcThrashingDetector is disabled.
If enabled, Bazel profiles the build and writes a JSON-format profile into
a file in the output base. View profile by loading into chrome://tracing.
By default Bazel writes the profile for all build-like commands and query.
A comma-separated list of Google Cloud authentication scopes.
Specifies the file to get authentication credentials from. See https://cloud.google.com/docs/authentication for details.
Whether to use 'Google Application Default Credentials' for authentication. See https://cloud.google.com/docs/authentication for details. Disabled by default.
The maximum number of condition labels to show. -1 means no truncation and
0 means no annotation. This option is only applicable to --output=graph.
If true, then the graph will be emitted 'factored', i.e. topologically
equivalent nodes will be merged together and their labels concatenated.
This option is only applicable to --output=graph.
The maximum length of the label string for a graph node in the output.
Longer labels will be truncated; -1 means no truncation. This option is
only applicable to --output=graph.
Configures keep-alive pings for outgoing gRPC connections. If this is set, then Bazel sends pings after this much time of no read operations on the connection, but only if there is at least one pending gRPC call. Times are treated as second granularity; it is an error to set a value less than one second. By default, keep-alive pings are disabled. You should coordinate with the service owner before enabling this setting. For example, to set a value of 30 seconds, use --grpc_keepalive_time=30s.
Configures a keep-alive timeout for outgoing gRPC connections. If keep-alive pings are enabled with --grpc_keepalive_time, then Bazel times out a connection if it does not receive a ping reply after this much time. Times are treated as second granularity; it is an error to set a value less than one second. If keep-alive pings are disabled, then this setting is ignored.
Whether to manually output a heap dump if an OOM is thrown (including
manual OOMs due to reaching --gc_thrashing_limits). The dump will be
written to <output_base>/<invocation_id>.heapdump.hprof. This option
effectively replaces -XX:+HeapDumpOnOutOfMemoryError, which has no effect
for manual OOMs.
If true, Blaze will remove FileState and DirectoryListingState nodes after
the related File and DirectoryListing nodes are done, to save memory. We
expect that these nodes are less likely to be needed again. If they are,
the program will re-evaluate them.
The maximum timeout for http download retries. With a value of 0, no
timeout maximum is defined.
If true, Bazel ignores `bazel_dep` and `use_extension` declared as
`dev_dependency` in the MODULE.bazel of the root module. Note that dev
dependencies are always ignored in a MODULE.bazel file that is not the root
module's, regardless of the value of this flag.
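For illustration, in a root MODULE.bazel like the following (the module
names are hypothetical), passing this flag drops rules_lint from
resolution:

```
# MODULE.bazel of the root module
bazel_dep(name = "rules_foo", version = "1.0")
bazel_dep(name = "rules_lint", version = "2.3", dev_dependency = True)
```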
If enabled, implicit dependencies will be included in the dependency graph
over which the query operates. An implicit dependency is one that is not
explicitly specified in the BUILD file but added by bazel. For cquery, this
option controls filtering resolved toolchains.
aquery, cquery: whether to include aspect-generated actions in the output.
query: no-op (aspects are always followed).
If set to true, tags will be propagated from a target to the actions'
execution requirements; otherwise tags are not propagated. See
https://github.com/bazelbuild/bazel/issues/8830 for details.
Check the validity of elements added to depsets, in all constructors.
Elements must be immutable, but historically the depset(direct=...)
constructor forgot to check. Use tuples instead of lists in depset
elements. See https://github.com/bazelbuild/bazel/issues/10313 for details.
A comma-separated list of rules (or other symbols) that were previously
part of Bazel and which are now to be retrieved from their respective
external repositories. This flag is intended to be used to facilitate
migration of rules out of Bazel. See also
https://github.com/bazelbuild/bazel/issues/23043.
A symbol that is autoloaded within a file behaves as if its
built-into-Bazel definition were replaced by its canonical new definition
in an
external repository. For a BUILD file, this essentially means implicitly
adding a load() statement. For a .bzl file, it's either a load() statement
or a change to a field of the `native` object, depending on whether the
autoloaded symbol is a rule.
Bazel maintains a hardcoded list of all symbols that may be autoloaded;
only those symbols may appear in this flag. For each symbol, Bazel knows
the new definition location in an external repository, as well as a set of
special-cased repositories that must not autoload it to avoid creating
cycles.
A list item of "+foo" in this flag causes symbol foo to be autoloaded,
except in foo's exempt repositories, within which the Bazel-defined version
of foo is still available.
A list item of "foo" triggers autoloading as above, but the Bazel-defined
version of foo is not made available to the excluded repositories. This
ensures that foo's external repository does not depend on the old Bazel
implementation of foo.
A list item of "-foo" does not trigger any autoloading, but makes the Bazel-
defined version of foo inaccessible throughout the workspace. This is used
to validate that the workspace is ready for foo's definition to be deleted
from Bazel.
If a symbol is not named in this flag then it continues to work as normal
-- no autoloading is done, nor is the Bazel-defined version suppressed. For
configuration see https://github.com/bazelbuild/bazel/blob/master/src/main/java/com/google/devtools/build/lib/packages/AutoloadSymbols.java.
As a shortcut, a whole repository may also be used; for example,
+@rules_python will autoload all Python rules.
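Combining the list forms above in a single invocation (the symbol names
chosen here are illustrative):

```
# +java_library:  autoload; Bazel version still usable in exempt repos.
# py_binary:      autoload; Bazel version hidden from excluded repos.
# -cc_library:    no autoload; Bazel version suppressed everywhere.
# +@rules_python: repository shortcut covering all Python rules.
bazel build //... \
  --incompatible_autoload_externally=+java_library,py_binary,-cc_library,+@rules_python
```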
If incompatible_enforce_config_setting_visibility=false, this is a no-op.
Else, if this flag is false, any config_setting without an explicit
visibility attribute is //visibility:public. If this flag is true,
config_setting follows the same visibility logic as all other rules. See
https://github.com/bazelbuild/bazel/issues/12933.
When true, Bazel no longer returns a list from
java_info.java_output[0].source_jars but returns a depset instead.
When true, Bazel no longer returns a list from
linking_context.libraries_to_link but returns a depset instead.
If false, native repo rules can be used in WORKSPACE; otherwise, Starlark
repo rules must be used instead. Native repo rules include
local_repository, new_local_repository, local_config_platform,
android_sdk_repository, and android_ndk_repository.
If true, java_binary is always executable. The create_executable attribute
is removed.
Disable objc_library's custom transition and inherit from the top level
target instead.
If set to true, rule attributes cannot set 'cfg = "host"'. Rules should set
'cfg = "exec"' instead.
If set to true, disable the ability to access providers on 'target' objects
via field syntax. Use provider-key syntax instead. For example, instead of
using `ctx.attr.dep.my_info` to access `my_info` from inside a rule
implementation function, use `ctx.attr.dep[MyInfo]`. See
https://github.com/bazelbuild/bazel/issues/9014 for details.
If set to true, the default value of the `allow_empty` argument of glob()
is False.
If set to true, rule implementation functions may not return a struct. They
must instead return a list of provider instances.
When true, Bazel no longer modifies command line flags used for linking,
and also doesn't selectively decide which flags go to the param file and
which don't. See https://github.com/bazelbuild/bazel/issues/7670 for
details.
If enabled, certain deprecated APIs (native.repository_name,
Label.workspace_name, Label.relative) can be used.
If true, proto lang rules define toolchains from rules_proto, rules_java,
rules_cc repositories.
If true, enforce config_setting visibility restrictions. If false, every
config_setting is visible to every target. See
https://github.com/bazelbuild/bazel/issues/12932.
If set to true, native.existing_rule and native.existing_rules return
lightweight immutable view objects instead of mutable dicts.
If enabled, targets that have unknown attributes set to None fail.
In package_group's `packages` attribute, changes the meaning of the value
"//..." to refer to all packages in the current repository instead of all
packages in any repository. You can use the special value "public" in place
of "//..." to obtain the old behavior. This flag requires that --
incompatible_package_group_has_public_syntax also be enabled.
If set to true, the output_jar and host_javabase parameters in
pack_sources, and host_javabase in compile, will all be removed.
If this option is set, sorts --order_output=auto output in lexicographical
order.
If enabled, actions registered with ctx.actions.run and
ctx.actions.run_shell with both 'env' and 'use_default_shell_env = True'
specified will
use an environment obtained from the default shell environment by
overriding with the values passed in to 'env'. If disabled, the value of
'env' is completely ignored in this case.
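The merge rule described here can be sketched in Python (a behavioral sketch of the documented semantics, not Bazel's implementation; the names are made up):

```python
def effective_env(default_shell_env, env, use_default_shell_env):
    """Sketch of how an action's environment is computed with the flag enabled."""
    if use_default_shell_env:
        merged = dict(default_shell_env)  # start from the default shell env
        merged.update(env)                # values from 'env' win on conflict
        return merged
    return dict(env)                      # without the default env, only 'env' applies

# With the flag enabled: PATH comes from the shell env, LANG is overridden.
print(effective_env({"PATH": "/usr/bin", "LANG": "C"},
                    {"LANG": "en_US.UTF-8"}, True))
```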
If set to true, the API to create actions is only available on
`ctx.actions`, not on `ctx`.
If set to true, disables the function `attr.license`.
If set, (used) source files are package private unless exported
explicitly. See
https://github.com/bazelbuild/proposals/blob/master/designs/2019-10-24-file-visibility.md.
If true, then methods on <code>repository_ctx</code> that are passed a
Label will no longer automatically watch the file under that label for
changes even if <code>watch = "no"</code>, and
<code>repository_ctx.path</code> no longer causes the returned path to be
watched. Use <code>repository_ctx.watch</code> instead.
If set to true, disables the `outputs` parameter of the `rule()` Starlark
function.
If set to true, the ObjcProvider's APIs for linking info will be removed.
In package_group's `packages` attribute, allows writing "public" or
"private" to refer to all packages or no packages respectively.
If enabled, when outputting package_group's `packages` attribute, the
leading `//` will not be omitted.
Deprecated. No-op. Use --remote_build_event_upload=minimal instead.
If set to true, symlinks uploaded to a remote or disk cache are allowed to
dangle.
Whether to send all values of a multi-valued header to the remote
downloader instead of just the first.
If set to true, output paths are relative to input root instead of working
directory.
If set to true, Bazel will always upload symlinks as such to a remote or
disk cache. Otherwise, non-dangling relative symlinks (and only those) will
be uploaded as the file or directory they point to.
If set to true, rule create_linking_context will require linker_inputs
instead of libraries_to_link. The old getters of linking_context will also
be disabled and just linker_inputs will be available.
If set to true, the command parameter of actions.run_shell will only
accept strings.
If enabled, certain language-specific modules (such as `cc_common`) are
unavailable in user .bzl files and may only be called from their respective
rules repositories.
Disables the to_json and to_proto methods of struct, which pollute the
struct field namespace. Instead, use json.encode or json.encode_indent for
JSON, or proto.encode_text for textproto.
If set to true, the top level aspect will honor its required providers and
only run on top level targets whose rules' advertised providers satisfy the
required providers of the aspect.
When true, Bazel will stringify the label @//foo:bar to @//foo:bar, instead
of //foo:bar. This only affects the behavior of str(), the % operator, and
so on; the behavior of repr() is unchanged. See
https://github.com/bazelbuild/bazel/issues/15916 for more information.
When true, Bazel will no longer allow using cc_configure from @bazel_tools.
Please see https://github.com/bazelbuild/bazel/issues/10134 for details and
migration instructions.
If true, uses the plus sign (+) as the separator in canonical repo names,
instead of the tilde (~). This is to address severe performance issues on
Windows; see https://github.com/bazelbuild/bazel/issues/22865 for more
information.
If set to true, the visibility of private rule attributes is checked with
respect to the rule definition, falling back to rule usage if not visible.
If set and --universe_scope is unset, then a value of --universe_scope will
be inferred as the list of unique target patterns in the query expression.
Note that the --universe_scope value inferred for a query expression that
uses universe-scoped functions (e.g. `allrdeps`) may not be what you want,
so you should use this option only if you know what you are doing. See
https://bazel.build/reference/query#sky-query for details and examples. If
--universe_scope is set, then this option's value is ignored. Note: this
option applies only to `query` (i.e. not `cquery`).
Unique identifier, in UUID format, for the command being run. If explicitly
specified uniqueness must be ensured by the caller. The UUID is printed to
stderr, the BEP and remote execution protocol.
Continue as much as possible after an error. While the target that failed
and those that depend on it cannot be analyzed, other prerequisites of
these targets can be.
If false, Blaze will discard the in-memory state from this build when the
build finishes. Subsequent builds will not have any incrementality with
respect to this one.
Use this to suppress generation of the legacy important_outputs field in
the TargetComplete event. important_outputs are required for the Bazel to
ResultStore integration.
Whether each format is terminated with \0 instead of newline.
--loading_phase_threads=<integer, or a keyword ("auto", "HOST_CPUS", "HOST_RAM"), optionally followed by an operation ([-|*]<float>) eg. "auto", "HOST_CPUS*.5">
Number of parallel threads to use for the loading/analysis phase. Takes
an integer, or a keyword ("auto", "HOST_CPUS", "HOST_RAM"), optionally
followed by an operation ([-|*]<float>), e.g. "auto", "HOST_CPUS*.5". "auto"
sets a reasonable default based on host resources. Must be at least 1.
Specifies how and whether or not to use the lockfile. Valid values are
`update` to use the lockfile and update it if there are changes, `refresh`
to additionally refresh mutable information (yanked versions and previously
missing modules) from remote registries from time to time, `error` to use
the lockfile but throw an error if it's not up-to-date, or `off` to
neither read from nor write to the lockfile.
The maximum number of Starlark computation steps that may be executed by a
BUILD file (zero means no limit).
If set, write memory usage data to the specified file at the end of each
phase, and write stable-heap data to the master log at the end of the build.
Tune the memory profile's computation of stable heap at end of build.
Should be an even number of integers separated by commas. In each pair, the
first integer is the number of GCs to perform and the second is the number
of seconds to wait between them. For example, 2,4,4,0 would perform 2 GCs
with a 4-second pause, followed by 4 GCs with no pause.
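The pair structure of this value can be sketched as (a hypothetical helper, not part of Bazel):

```python
def parse_gc_pairs(spec):
    """Parse a spec like "2,4,4,0" into (gc_count, pause_seconds) pairs."""
    nums = [int(x) for x in spec.split(",")]
    if len(nums) % 2 != 0:
        raise ValueError("expected an even number of integers: " + spec)
    # Pair up alternating elements: (nums[0], nums[1]), (nums[2], nums[3]), ...
    return list(zip(nums[0::2], nums[1::2]))

print(parse_gc_pairs("2,4,4,0"))  # [(2, 4), (4, 0)]
```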
The maximum depth of the graph internal to a depset (also known as
NestedSet), above which the depset() constructor will fail.
If enabled, deps from "nodep" attributes will be included in the dependency
graph over which the query operates. A common example of a "nodep"
attribute is "visibility". Run and parse the output of `info build-
language` to learn about all the "nodep" attributes in the build language.
Output the results in dependency-ordered (default) or unordered fashion.
The unordered output is faster but only supported when --output is not
minrank, maxrank, or graph.
Expands to: --order_output=no
Whether each format is terminated with \0 instead of newline.
Expands to: --line_terminator_null=true
Output the results unordered (no), dependency-ordered (deps), or fully
ordered (full). The default is 'auto', meaning that results are output
either dependency-ordered or fully ordered, depending on the output
formatter (dependency-ordered for proto, minrank, maxrank, and graph, fully
ordered for all others). When output is fully ordered, nodes are printed in
a fully deterministic (total) order. First, all nodes are sorted
alphabetically. Then, each node in the list is used as the start of a post-
order depth-first search in which outgoing edges to unvisited nodes are
traversed in alphabetical order of the successor nodes. Finally, nodes are
printed in the reverse of the order in which they were visited.
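The full-ordering procedure described above can be sketched in Python (a reading of the documented algorithm, not Bazel's code):

```python
def full_order(graph):
    """graph: dict mapping each node to a list of its successor nodes."""
    visited = []   # nodes in the order their DFS finishes (post-order)
    seen = set()

    def dfs(node):
        seen.add(node)
        # Traverse outgoing edges to unvisited nodes in alphabetical order.
        for succ in sorted(graph.get(node, [])):
            if succ not in seen:
                dfs(succ)
        visited.append(node)  # post-order: record after all successors

    # Use each node, in alphabetical order, as a DFS start.
    for node in sorted(graph):
        if node not in seen:
            dfs(node)

    # Nodes are printed in the reverse of the visitation order.
    return list(reversed(visited))

print(full_order({"a": ["c", "b"], "b": ["c"], "c": []}))  # ['a', 'b', 'c']
```

Note that the result is dependency-ordered: every node precedes all of its successors.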
Output the results in dependency-ordered (default) or unordered fashion.
The unordered output is faster but only supported when --output is not
minrank, maxrank, or graph.
Expands to: --order_output=auto
The format in which the query results should be printed. Allowed values for
query are: build, graph, streamed_jsonproto, label, label_kind, location,
maxrank, minrank, package, proto, streamed_proto, textproto, xml.
Override a module with a local path in the form of <module name>=<path>. If the given path is an absolute path, it will be used as it is. If the given path is a relative path, it is relative to the current working directory. If the given path starts with '%workspace%', it is relative to the workspace root, which is the output of `bazel info workspace`. If the given path is empty, then remove any previous overrides.
Override a repository with a local path in the form of <repository name>=<path>. If the given path is an absolute path, it will be used as it is. If the given path is a relative path, it is relative to the current working directory. If the given path starts with '%workspace%', it is relative to the workspace root, which is the output of `bazel info workspace`. If the given path is empty, then remove any previous overrides.
A colon-separated list of where to look for packages. Elements beginning with '%workspace%' are relative to the enclosing workspace. If omitted or empty, the default is the output of 'bazel info default-package-path'.
If set, profile Bazel and write data to the specified file. Use bazel
analyze-profile to analyze the profile.
Show the command progress in the terminal title. Useful to see what bazel is doing when having multiple terminal tabs.
If true, attributes whose value is not explicitly specified in the BUILD
file are included; otherwise they are omitted. This option is applicable
to --output=proto.
Populate the definition_stack proto field, which records for each rule
instance the Starlark call stack at the moment the rule's class was defined.
If enabled, configurable attributes created by select() are flattened. For
list types the flattened representation is a list containing each value of
the select map exactly once. Scalar types are flattened to null.
Populate the source_aspect_name proto field of each Attribute with the
source aspect that the attribute came from (empty string if it did not).
Whether or not to calculate and populate the $internal_attr_hash attribute.
Populate the instantiation call stack of each rule. Note that this requires
the stack to be present
Comma separated list of attributes to include in output. Defaults to all
attributes. Set to empty string to not output any attribute. This option is
applicable to --output=proto.
Whether or not to populate the rule_input and rule_output fields.
If set, query will read the query from the file named here, rather than on
the command line. It is an error to specify a file here as well as a
command-line query.
By default, the Bazel profiler records only aggregated data for fast but
numerous events (such as statting files). If this option is enabled, the
profiler records each event, resulting in more precise profiling data but a
large performance hit. This option only has an effect if --profile is used
as well.
Specifies the registries to use to locate Bazel module dependencies. The
order is important: modules will be looked up in earlier registries first,
and only fall back to later registries when they're missing from the
earlier ones.
If true, the location of BUILD files in xml and proto outputs will be
relative. By default, the location output is an absolute path and will not
be consistent across machines. You can set this option to true to have a
consistent result across machines.
If set to 'all', all local outputs referenced by BEP are uploaded to the remote cache. If set to 'minimal', local outputs referenced by BEP are not uploaded to the remote cache, except for files that are important to the consumers of BEP (e.g. test logs and timing profile). The bytestream:// scheme is always used for the URI of files even if they are missing from the remote cache. Defaults to 'minimal'.
The hostname and instance name to be used in bytestream:// URIs that are written into build event streams. This option can be set when builds are performed using a proxy, which causes the values of --remote_executor and --remote_instance_name to no longer correspond to the canonical name of the remote execution service. When not set, it will default to "${hostname}/${instance_name}".
A URI of a caching endpoint. The supported schemas are http, https, grpc, grpcs (grpc with TLS enabled) and unix (local UNIX sockets). If no schema is provided Bazel will default to grpcs. Specify grpc://, http:// or unix: schema to disable TLS. See https://bazel.build/remote/caching
If enabled, compress/decompress cache blobs with zstd when their size is at least --experimental_remote_cache_compression_threshold.
Specify a header that will be included in cache requests: --remote_cache_header=Name=Value. Multiple headers can be passed by specifying the flag multiple times. Multiple values for the same name will be converted to a comma-separated list.
Set the default exec properties to be used as the remote execution platform
if an execution platform does not already set exec_properties.
Set the default platform properties to be set for the remote execution API, if the execution platform does not already set remote_execution_properties. This value will also be used if the host platform is selected as the execution platform for remote execution.
Downloads all remote outputs to the local machine. This flag is an alias
for --remote_download_outputs=all.
Expands to: --remote_download_outputs=all
Does not download any remote build outputs to the local machine. This flag
is an alias for --remote_download_outputs=minimal.
Expands to: --remote_download_outputs=minimal
If set to 'minimal' doesn't download any remote build outputs to the local
machine, except the ones required by local actions. If set to 'toplevel',
it behaves like 'minimal' except that it also downloads outputs of top level
targets to the local machine. Both options can significantly reduce build
times if network bandwidth is a bottleneck.
Force remote build outputs whose path matches this pattern to be
downloaded, irrespective of --remote_download_outputs. Multiple patterns
may be specified by repeating this flag.
Instead of downloading remote build outputs to the local machine, create
symbolic links. The target of the symbolic links can be specified in the
form of a template string. This template string may contain {hash} and
{size_bytes} that expand to the hash of the object and the size in bytes,
respectively. These symbolic links may, for example, point to a FUSE file
system that loads objects from the CAS on demand.
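The template expansion presumably behaves like ordinary placeholder substitution; a sketch (the /cas prefix and values are made up):

```python
# Hypothetical template as it might be passed to the flag.
template = "/cas/{hash}-{size_bytes}"

# For each output, Bazel would substitute the object's digest and size.
link_target = template.format(hash="ab12cd", size_bytes=2048)
print(link_target)  # /cas/ab12cd-2048
```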
Only downloads remote outputs of top level targets to the local machine.
This flag is an alias for --remote_download_outputs=toplevel.
Expands to: --remote_download_outputs=toplevel
Specify a header that will be included in remote downloader requests: --remote_downloader_header=Name=Value. Multiple headers can be passed by specifying the flag multiple times. Multiple values for the same name will be converted to a comma-separated list.
Specify a header that will be included in execution requests: --remote_exec_header=Name=Value. Multiple headers can be passed by specifying the flag multiple times. Multiple values for the same name will be converted to a comma-separated list.
The relative priority of actions to be executed remotely. The semantics of the particular priority values are server-dependent.
HOST or HOST:PORT of a remote execution endpoint. The supported schemas are grpc, grpcs (grpc with TLS enabled) and unix (local UNIX sockets). If no schema is provided Bazel will default to grpcs. Specify grpc:// or unix: schema to disable TLS.
If specified, a path to a file to log gRPC call related details. This log consists of a sequence of serialized com.google.devtools.build.lib.remote.logging.RemoteExecutionLog.LogEntry protobufs with each message prefixed by a varint denoting the size of the following serialized protobuf message, as performed by the method LogEntry.writeDelimitedTo(OutputStream).
Specify a header that will be included in requests: --remote_header=Name=Value. Multiple headers can be passed by specifying the flag multiple times. Multiple values for the same name will be converted to a comma-separated list.
Whether to fall back to standalone local execution strategy if remote execution fails.
No-op, deprecated. See https://github.com/bazelbuild/bazel/issues/7480 for details.
Limit the max number of concurrent connections to remote cache/executor. By
default the value is 100. Setting this to 0 means no limitation.
For HTTP remote cache, one TCP connection could handle one request at one
time, so Bazel could make up to --remote_max_connections concurrent
requests.
For gRPC remote cache/executor, one gRPC channel could usually handle 100+
concurrent requests, so Bazel could make around `--remote_max_connections *
100` concurrent requests.
Choose when to print remote execution messages. Valid values are `failure`,
to print only on failures, `success` to print only on successes, and
`all` to always print.
Connect to the remote cache through a proxy. Currently this flag can only be used to configure a Unix domain socket (unix:/path/to/socket).
The relative priority of remote actions to be stored in remote cache. The semantics of the particular priority values are server-dependent.
The maximum number of attempts to retry a transient error. If set to 0, retries are disabled.
The maximum backoff delay between remote retry attempts. Following units can be used: Days (d), hours (h), minutes (m), seconds (s), and milliseconds (ms). If the unit is omitted, the value is interpreted as seconds.
The maximum amount of time to wait for remote execution and cache calls. For the REST cache, this is both the connect and the read timeout. Following units can be used: Days (d), hours (h), minutes (m), seconds (s), and milliseconds (ms). If the unit is omitted, the value is interpreted as seconds.
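The unit grammar shared by these timeout and backoff flags can be sketched as (a hypothetical parser, not Bazel's implementation):

```python
import re

# Seconds per unit; a bare number is interpreted as seconds.
UNIT_SECONDS = {"ms": 0.001, "s": 1, "m": 60, "h": 3600, "d": 86400}

def parse_duration(text):
    """Parse values like "10m", "500ms", or a bare "60" into seconds."""
    match = re.fullmatch(r"(\d+(?:\.\d+)?)(ms|[dhms])?", text)
    if not match:
        raise ValueError("bad duration: " + text)
    value, unit = match.groups()
    return float(value) * UNIT_SECONDS[unit or "s"]

print(parse_duration("10m"))  # 600.0
print(parse_duration("60"))   # 60.0
```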
Whether to upload locally executed action results to the remote cache if the remote cache supports it and the user is authorized to do so.
If set to true, Bazel will compute the hash sum of all remote downloads and discard the remotely cached values if they don't match the expected value.
Specifies additional environment variables to be available only for
repository rules. Note that repository rules see the full environment
anyway, but in this way configuration information can be passed to
repositories through options without invalidating the action graph.
A list of additional repositories (beyond the hardcoded ones Bazel knows
about) where autoloads are not to be added. This should typically contain
repositories that are transitively depended on by a repository that may be
loaded automatically (and which can therefore potentially create a cycle).
Specifies the cache location of the downloaded values obtained during the
fetching of external repositories. An empty string as argument requests the
cache to be disabled; otherwise the default of
'<output_user_root>/cache/repos/v1' is used.
If set, downloading using ctx.download{,_and_extract} is not allowed during
repository fetching. Note that network access is not completely disabled;
ctx.execute could still run an arbitrary executable that accesses the
Internet.
If enabled, causes Bazel to print "Loading package:" messages.
Minimum number of seconds between progress messages in the output.
Flag for advanced configuration of Bazel's internal Skyframe engine. If
Bazel detects its retained heap percentage usage exceeds the threshold set
by --skyframe_high_water_mark_threshold, when a full GC event occurs, it
will drop unnecessary temporary Skyframe state, up to this many times per
invocation. Defaults to Integer.MAX_VALUE; effectively unlimited. Zero
means that full GC events will never trigger drops. If the limit is
reached, Skyframe state will no longer be dropped when a full GC event
occurs and that retained heap percentage threshold is exceeded.
Flag for advanced configuration of Bazel's internal Skyframe engine. If
Bazel detects its retained heap percentage usage exceeds the threshold set
by --skyframe_high_water_mark_threshold, when a minor GC event occurs, it
will drop unnecessary temporary Skyframe state, up to this many times per
invocation. Defaults to Integer.MAX_VALUE; effectively unlimited. Zero
means that minor GC events will never trigger drops. If the limit is
reached, Skyframe state will no longer be dropped when a minor GC event
occurs and that retained heap percentage threshold is exceeded.
Flag for advanced configuration of Bazel's internal Skyframe engine. If
Bazel detects its retained heap percentage usage is at least this
threshold, it will drop unnecessary temporary Skyframe state. Tweaking this
may let you mitigate wall time impact of GC thrashing, when the GC
thrashing is (i) caused by the memory usage of this temporary state and
(ii) more costly than reconstituting the state when it is needed.
Slims down the size of the JSON profile by merging events if the profile
gets too large.
Writes into the specified file a pprof profile of CPU usage by all Starlark
threads.
If true, the tests() expression gives an error if it encounters a
test_suite containing non-test targets.
Specify a path to a TLS certificate that is trusted to sign server certificates.
Specify the TLS client certificate to use; you also need to provide a client key to enable client authentication.
Specify the TLS client key to use; you also need to provide a client certificate to enable client authentication.
Query: If disabled, dependencies on 'exec configuration' will not be
included in the dependency graph over which the query operates. An 'exec
configuration' dependency edge, such as the one from any 'proto_library'
rule to the Protocol Compiler, usually points to a tool executed during the
build rather than a part of the same 'target' program.
Cquery: If disabled, filters out all configured targets which cross an
execution transition from the top-level target that discovered this
configured target. That means if the top-level target is in the target
configuration, only configured targets also in the target configuration
will be returned. If the top-level target is in the exec configuration,
only exec configured targets will be returned. This option will NOT exclude
resolved toolchains.
If false, Blaze will not persist data that allows for invalidation and re-
evaluation on incremental builds in order to save memory on this build.
Subsequent builds will not have any incrementality with respect to this
one. Usually you will want to specify --batch when setting this to false.
Number of concurrent actions shown in the detailed progress bar; each
action is shown on a separate line. The progress bar always shows at
least one; all numbers less than 1 are mapped to 1.
Specifies which events to show in the UI. It is possible to add or remove
events to the default ones using leading +/-, or override the default set
completely with direct assignment. The set of supported event kinds include
INFO, DEBUG, ERROR and more.
A comma-separated set of target patterns (additive and subtractive). The
query may be performed in the universe defined by the transitive closure of
the specified targets. This option is used for the query and cquery
commands.
For cquery, the input to this option is the targets all answers are built
under and so this option may affect configurations and transitions. If this
option is not specified, the top-level targets are assumed to be the
targets parsed from the query expression. Note: For cquery, not specifying
this option may cause the build to break if targets parsed from the query
expression are not buildable with top-level options.
Specifies the directory that should hold the external repositories in
vendor mode, whether for the purpose of fetching them into it or using them
while building. The path can be specified as either an absolute path or a
path relative to the workspace directory.
On Linux/macOS: If true, bazel tries to use the operating system's file watch service for local changes instead of scanning every file for a change. On Windows: this flag currently is a no-op but can be enabled in conjunction with --experimental_windows_watchfs. On any OS: The behavior is undefined if your workspace is on a network file system, and files are edited on a remote machine.
If true, rule attributes whose value is not explicitly specified in the
BUILD file are printed; otherwise they are omitted.