@@ -286,7 +288,7 @@
All compilers are expected to be able to compile to the C99 language standard, as some C99 features are used in the source code. Microsoft Visual Studio doesn't fully support C99, so in practice shared code is limited to using C99 features that it does support.
gcc
The minimum accepted version of gcc is 5.0. Older versions will generate a warning by configure and are unlikely to work.
-The JDK is currently known to be able to compile with at least version 10.2 of gcc.
+The JDK is currently known to be able to compile with at least version 11.2 of gcc.
In general, any version between these two should be usable.
clang
The minimum accepted version of clang is 3.5. Older versions will not be accepted by configure.
@@ -568,7 +570,8 @@ x86_64-linux-gnu-to-ppc64le-linux-gnu
To be able to build, we need a "Build JDK", which is a JDK built from the current sources (that is, the same as the end result of the entire build process), but able to run on the build system, and not the target system. (In contrast, the Boot JDK should be from an older release, e.g. JDK 8 when building JDK 9.)
The build process will create a minimal Build JDK for you, as part of building. To speed up the build, you can use --with-build-jdk to configure to point to a pre-built Build JDK. Please note that the build result is unpredictable, and can possibly break in subtle ways, if the Build JDK does not exactly match the current sources.
-You must specify the target platform when cross-compiling. Doing so will also automatically turn the build into a cross-compiling mode. The simplest way to do this is to use the --openjdk-target argument, e.g. --openjdk-target=arm-linux-gnueabihf. or --openjdk-target=aarch64-oe-linux. This will automatically set the --build, --host and --target options for autoconf, which can otherwise be confusing. (In autoconf terminology, the "target" is known as "host", and "target" is used for building a Canadian cross-compiler.)
+You must specify the target platform when cross-compiling. Doing so will also automatically turn the build into a cross-compiling mode. The simplest way to do this is to use the --openjdk-target argument, e.g. --openjdk-target=arm-linux-gnueabihf or --openjdk-target=aarch64-oe-linux. This will automatically set the --host and --target options for autoconf, which can otherwise be confusing. (In autoconf terminology, the "target" is known as "host", and "target" is used for building a Canadian cross-compiler.)
+If --build has not been explicitly passed to configure, --openjdk-target will autodetect the build platform and set the flag internally; otherwise, the platform that was explicitly passed to --build will be used instead.
You will need two copies of your toolchain, one which generates output that can run on the target system (the normal, or target, toolchain), and one that generates output that can run on the build system (the build toolchain). Note that cross-compiling is only supported for gcc for the time being. The gcc standard is to prefix cross-compiling toolchains with the target denominator. If you follow this standard, configure is likely to pick up the toolchain correctly.
The build toolchain will be autodetected just the same way the normal build/target toolchain will be autodetected when not cross-compiling. If this is not what you want, or if the autodetection fails, you can specify a devkit containing the build toolchain using --with-build-devkit to configure, or by giving BUILD_CC and BUILD_CXX arguments.
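Putting these options together, a cross-compiling configure invocation might look like the following sketch. The target triplet, the toolchain prefix and the Build JDK path are hypothetical examples, not requirements:

```shell
# Hypothetical sketch: cross-compiling the JDK for 64-bit ARM Linux from an
# x86_64 Linux build machine. Assumes a gcc cross-toolchain prefixed with
# "aarch64-linux-gnu-" is on the PATH, so configure can pick it up by the
# standard prefix convention.
bash configure \
    --openjdk-target=aarch64-linux-gnu \
    --with-build-jdk=/path/to/prebuilt-build-jdk \
    BUILD_CC=gcc BUILD_CXX=g++
make images
```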
@@ -887,30 +890,28 @@ spawn failed
If you need general help or advice about developing for the JDK, you can also contact the Adoption Group. See the section on Contributing to OpenJDK for more information.
Reproducible Builds
Build reproducibility is the property of getting exactly the same bits out when building, every time, independent of who builds the product, or where. This is for many reasons a harder goal than it initially appears, but it is an important goal, for security reasons and others. Please see Reproducible Builds for more information about the background and reasons for reproducible builds.
-Currently, it is not possible to build OpenJDK fully reproducibly, but getting there is an ongoing effort. There are some things you can do to minimize non-determinism and make a larger part of the build reproducible:
+Currently, it is not possible to build OpenJDK fully reproducibly, but getting there is an ongoing effort.
+An absolute prerequisite for building reproducibly is to specify a fixed build time, since time stamps are embedded in many file formats. This is done by setting the SOURCE_DATE_EPOCH environment variable, which is an industry standard that many tools, such as gcc, recognize and use in place of the current time when generating output.
+To generate reproducible builds, you must set SOURCE_DATE_EPOCH before running configure. The value in SOURCE_DATE_EPOCH will be stored in the configuration, and used by make. Setting SOURCE_DATE_EPOCH before running make will have no effect on the build.
+You must also make sure your build does not rely on configure's default adhoc version strings. The OPT segment of the default adhoc version string includes the user name and source directory. You can either override just the OPT segment using --with-version-opt=<any fixed string>, or you can specify the entire version string using --with-version-string=<your version>.
+This is a typical example of how to build the JDK in a reproducible way:
+export SOURCE_DATE_EPOCH=946684800
+bash configure --with-version-opt=adhoc
+make
+Note that regardless of whether you specify a source date for configure or not, the JDK build system will set SOURCE_DATE_EPOCH for all build tools when building. If --with-source-date has the value updated (which is the default unless SOURCE_DATE_EPOCH is found in the environment by configure), the source date value will be determined at build time.
+There are several aspects of reproducible builds that can be individually adjusted by configure arguments. If any of these are given, they will override the value derived from SOURCE_DATE_EPOCH. These arguments are:
-- Turn on build system support for reproducible builds
+--with-source-date
+This option controls how the JDK build sets SOURCE_DATE_EPOCH when building. It can be set to a value describing a date, either an epoch based timestamp as an integer, or a valid ISO-8601 date.
+It can also be set to one of the special values current, updated or version. current means that the time of running configure will be used. version will use the nominal release date for the current JDK version. updated means that SOURCE_DATE_EPOCH will be set to the current time each time you run make. All choices, except for updated, will set a fixed value for the source date timestamp.
+When SOURCE_DATE_EPOCH is set, the default value for --with-source-date will be the value given by SOURCE_DATE_EPOCH. Otherwise, the default value is updated.
+--with-hotspot-build-time
+This option controls the build time string that will be included in the hotspot library (libjvm.so or jvm.dll). When the source date is fixed (e.g. by setting SOURCE_DATE_EPOCH), the default value for --with-hotspot-build-time will be an ISO 8601 representation of that time stamp. Otherwise the default value will be the current time when building hotspot.
+--with-copyright-year
+This option controls the copyright year in some generated text files. When the source date is fixed (e.g. by setting SOURCE_DATE_EPOCH), the default value for --with-copyright-year will be the year of that time stamp. Otherwise the default is the current year at the time of running configure. This can be overridden by --with-copyright-year=<year>.
+--enable-reproducible-build
+This option controls some additional behavior needed to make the build reproducible. When the source date is fixed (e.g. by setting SOURCE_DATE_EPOCH), this flag will be turned on by default. Otherwise, the value is determined by heuristics. If it is explicitly turned off, the build might not be reproducible.
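As a concrete illustration of the defaults described above, the fixed values can be derived from SOURCE_DATE_EPOCH with ordinary shell tools. This is a sketch assuming GNU date; the epoch 946684800 corresponds to 2000-01-01T00:00:00Z:

```shell
# Derive the values configure would default to from a fixed source date.
export SOURCE_DATE_EPOCH=946684800

# ISO 8601 form, as used by default for --with-hotspot-build-time.
iso_date=$(date -u -d "@$SOURCE_DATE_EPOCH" +%Y-%m-%dT%H:%M:%SZ)
echo "$iso_date"    # 2000-01-01T00:00:00Z

# Year of the time stamp, as used by default for --with-copyright-year.
year=$(date -u -d "@$SOURCE_DATE_EPOCH" +%Y)
echo "$year"        # 2000

# The same values could also be passed explicitly, e.g.:
#   bash configure --with-source-date="$SOURCE_DATE_EPOCH" \
#       --with-hotspot-build-time="$iso_date" --with-copyright-year="$year"
```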
-Add the flag --enable-reproducible-build to your configure command line. This will turn on support for reproducible builds where it could otherwise be lacking.
-
-- Do not rely on configure's default adhoc version strings
-
-Default adhoc version strings OPT segment include user name, source directory and timestamp. You can either override just the OPT segment using --with-version-opt=<any fixed string>, or you can specify the entire version string using --with-version-string=<your version>.
-
-- Specify how the build sets SOURCE_DATE_EPOCH
-
-The JDK build system will set the SOURCE_DATE_EPOCH environment variable during building, depending on the value of the --with-source-date option for configure. The default value is updated, which means that SOURCE_DATE_EPOCH will be set to the current time each time you are running make.
-The SOURCE_DATE_EPOCH environment variable is an industry standard, that many tools, such as gcc, recognize, and use in place of the current time when generating output.
-For reproducible builds, you need to set this to a fixed value. You can use the special value version which will use the nominal release date for the current JDK version, or a value describing a date, either an epoch based timestamp as an integer, or a valid ISO-8601 date.
-Hint: If your build environment already sets SOURCE_DATE_EPOCH, you can propagate this using --with-source-date=$SOURCE_DATE_EPOCH.
-
-- Specify a hotspot build time
-
-Set a fixed hotspot build time. This will be included in the hotspot library (libjvm.so or jvm.dll) and defaults to the current time when building hotspot. Use --with-hotspot-build-time=<any fixed string> for reproducible builds. It's a string so you don't need to format it specifically, so e.g. n/a will do. Another solution is to use the SOURCE_DATE_EPOCH variable, e.g. --with-hotspot-build-time=$(date --date=@$SOURCE_DATE_EPOCH).
-
-The copyright year in some generated text files are normally set to the current year. This can be overridden by --with-copyright-year=<year>. For fully reproducible builds, this needs to be set to a fixed value.
Hints and Suggestions for Advanced Users
Bash Completion
The configure and make commands try to play nice with bash command-line completion (using <tab> or <tab><tab>). To use this functionality, make sure you enable completion in your ~/.bashrc (see instructions for bash in your operating system).
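For example, on distributions that ship the bash-completion package, a snippet along these lines in ~/.bashrc is usually enough (the path shown is a common Linux location and may differ on your system):

```shell
# Enable programmable completion if the bash-completion package is installed.
if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
    . /etc/bash_completion
fi
```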
diff --git a/doc/building.md b/doc/building.md
index 459dbaa4c410109f14246746a217a8a33027284b..6bc5857811e763b10fe423b82f7cf9eb4c2b7868 100644
--- a/doc/building.md
+++ b/doc/building.md
@@ -135,6 +135,14 @@ space is required.
If you do not have access to sufficiently powerful hardware, it is also
possible to use [cross-compiling](#cross-compiling).
+#### Branch Protection
+
+In order to use Branch Protection features in the VM, `--enable-branch-protection`
+must be used. This option requires C++ compiler support (GCC 9.1.0+ or Clang
+10+). The resulting build can be run on both machines with and without support
+for branch protection in hardware. Branch Protection is only supported for
+Linux targets.
+
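A minimal sketch of such a build, assuming an aarch64 Linux target and a new-enough default toolchain:

```shell
# Hypothetical example: enable branch protection in the VM.
# Requires GCC 9.1.0+ or Clang 10+; only supported for Linux targets.
bash configure --enable-branch-protection
make images
```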
### Building on 32-bit arm
This is not recommended. Instead, see the section on [Cross-compiling](
@@ -236,8 +244,8 @@ It's possible to build both Windows and Linux binaries from WSL. To build
Windows binaries, you must use a Windows boot JDK (located in a
Windows-accessible directory). To build Linux binaries, you must use a Linux
boot JDK. The default behavior is to build for Windows. To build for Linux, pass
-`--build=x86_64-unknown-linux-gnu --host=x86_64-unknown-linux-gnu` to
-`configure`.
+`--build=x86_64-unknown-linux-gnu --openjdk-target=x86_64-unknown-linux-gnu`
+to `configure`.
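For instance, a Linux build from WSL might be configured as in the following sketch (the boot JDK path is a hypothetical example):

```shell
# Build Linux binaries from WSL instead of the default Windows target.
bash configure \
    --build=x86_64-unknown-linux-gnu \
    --openjdk-target=x86_64-unknown-linux-gnu \
    --with-boot-jdk=/usr/lib/jvm/jdk-17
make images
```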
If building Windows binaries, the source code must be located in a Windows-
accessible directory. This is because Windows executables (such as Visual Studio
@@ -321,7 +329,7 @@ issues.
Operating system Toolchain version
------------------ -------------------------------------------------------
- Linux gcc 10.2.0
+ Linux gcc 11.2.0
macOS Apple Xcode 10.1 (using clang 10.0.0)
Windows Microsoft Visual Studio 2019 update 16.7.2
@@ -335,7 +343,7 @@ features that it does support.
The minimum accepted version of gcc is 5.0. Older versions will generate a warning
by `configure` and are unlikely to work.
-The JDK is currently known to be able to compile with at least version 10.2 of
+The JDK is currently known to be able to compile with at least version 11.2 of
gcc.
In general, any version between these two should be usable.
@@ -374,9 +382,9 @@ available for this update.
### Microsoft Visual Studio
-For aarch64 machines running Windows the minimum accepted version is Visual Studio 2019
-(16.8 or higher). For all other platforms the minimum accepted version of
-Visual Studio is 2017. Older versions will not be accepted by `configure` and will
+For aarch64 machines running Windows the minimum accepted version is Visual Studio 2019
+(16.8 or higher). For all other platforms the minimum accepted version of
+Visual Studio is 2017. Older versions will not be accepted by `configure` and will
not work. For all platforms the maximum accepted version of Visual Studio is 2022.
If you have multiple versions of Visual Studio installed, `configure` will by
@@ -978,11 +986,16 @@ You *must* specify the target platform when cross-compiling. Doing so will also
automatically turn the build into a cross-compiling mode. The simplest way to
do this is to use the `--openjdk-target` argument, e.g.
`--openjdk-target=arm-linux-gnueabihf`. or `--openjdk-target=aarch64-oe-linux`.
-This will automatically set the `--build`, `--host` and `--target` options for
+This will automatically set the `--host` and `--target` options for
autoconf, which can otherwise be confusing. (In autoconf terminology, the
"target" is known as "host", and "target" is used for building a Canadian
cross-compiler.)
+If `--build` has not been explicitly passed to configure, `--openjdk-target`
+will autodetect the build platform and set the flag internally; otherwise,
+the platform that was explicitly passed to `--build` will be used
+instead.
+
### Toolchain Considerations
You will need two copies of your toolchain, one which generates output that can
@@ -1514,57 +1527,85 @@ https://reproducible-builds.org) for more information about the background and
reasons for reproducible builds.
Currently, it is not possible to build OpenJDK fully reproducibly, but getting
-there is an ongoing effort. There are some things you can do to minimize
-non-determinism and make a larger part of the build reproducible:
+there is an ongoing effort.
+
+An absolute prerequisite for building reproducibly is to specify a fixed build
+time, since time stamps are embedded in many file formats. This is done by
+setting the `SOURCE_DATE_EPOCH` environment variable, which is an [industry
+standard](https://reproducible-builds.org/docs/source-date-epoch/) that many
+tools, such as gcc, recognize and use in place of the current time when
+generating output.
+
+To generate reproducible builds, you must set `SOURCE_DATE_EPOCH` before running
+`configure`. The value in `SOURCE_DATE_EPOCH` will be stored in the
+configuration, and used by `make`. Setting `SOURCE_DATE_EPOCH` before running
+`make` will have no effect on the build.
+
+You must also make sure your build does not rely on `configure`'s default adhoc
+version strings. The `OPT` segment of the default adhoc version string includes
+the user name and source directory. You can either override just the `OPT`
+segment using `--with-version-opt=<any fixed string>`, or you can specify the
+entire version string using `--with-version-string=<your version>`.
- * Turn on build system support for reproducible builds
+This is a typical example of how to build the JDK in a reproducible way:
-Add the flag `--enable-reproducible-build` to your `configure` command line.
-This will turn on support for reproducible builds where it could otherwise be
-lacking.
+```
+export SOURCE_DATE_EPOCH=946684800
+bash configure --with-version-opt=adhoc
+make
+```
- * Do not rely on `configure`'s default adhoc version strings
+Note that regardless of whether you specify a source date for `configure`, the
+JDK build system will set `SOURCE_DATE_EPOCH` for all build tools when building.
+If `--with-source-date` has the value `updated` (which is the default unless
+`SOURCE_DATE_EPOCH` is found in the environment by `configure`), the source
+date value will be determined at build time.
-Default adhoc version strings OPT segment include user name, source directory
-and timestamp. You can either override just the OPT segment using
-`--with-version-opt=<any fixed string>`, or you can specify the entire version
-string using `--with-version-string=<your version>`.
+There are several aspects of reproducible builds that can be individually
+adjusted by `configure` arguments. If any of these are given, they will override
+the value derived from `SOURCE_DATE_EPOCH`. These arguments are:
- * Specify how the build sets `SOURCE_DATE_EPOCH`
+ * `--with-source-date`
-The JDK build system will set the `SOURCE_DATE_EPOCH` environment variable
-during building, depending on the value of the `--with-source-date` option for
-`configure`. The default value is `updated`, which means that
-`SOURCE_DATE_EPOCH` will be set to the current time each time you are running
-`make`.
+ This option controls how the JDK build sets `SOURCE_DATE_EPOCH` when
+ building. It can be set to a value describing a date, either an epoch based
+ timestamp as an integer, or a valid ISO-8601 date.
-The [`SOURCE_DATE_EPOCH` environment variable](
-https://reproducible-builds.org/docs/source-date-epoch/) is an industry
-standard, that many tools, such as gcc, recognize, and use in place of the
-current time when generating output.
+ It can also be set to one of the special values `current`, `updated` or
+ `version`. `current` means that the time of running `configure` will be
+ used. `version` will use the nominal release date for the current JDK
+ version. `updated` means that `SOURCE_DATE_EPOCH` will be set to the
+ current time each time you run `make`. All choices, except for
+ `updated`, will set a fixed value for the source date timestamp.
-For reproducible builds, you need to set this to a fixed value. You can use the
-special value `version` which will use the nominal release date for the current
-JDK version, or a value describing a date, either an epoch based timestamp as an
-integer, or a valid ISO-8601 date.
+ When `SOURCE_DATE_EPOCH` is set, the default value for `--with-source-date`
+ will be the value given by `SOURCE_DATE_EPOCH`. Otherwise, the default value
+ is `updated`.
-**Hint:** If your build environment already sets `SOURCE_DATE_EPOCH`, you can
-propagate this using `--with-source-date=$SOURCE_DATE_EPOCH`.
+ * `--with-hotspot-build-time`
+
+ This option controls the build time string that will be included in the
+ hotspot library (`libjvm.so` or `jvm.dll`). When the source date is fixed
+ (e.g. by setting `SOURCE_DATE_EPOCH`), the default value for
+ `--with-hotspot-build-time` will be an ISO 8601 representation of that time
+ stamp. Otherwise the default value will be the current time when building
+ hotspot.
- * Specify a hotspot build time
+ * `--with-copyright-year`
-Set a fixed hotspot build time. This will be included in the hotspot library
-(`libjvm.so` or `jvm.dll`) and defaults to the current time when building
-hotspot. Use `--with-hotspot-build-time=` for reproducible
-builds. It's a string so you don't need to format it specifically, so e.g. `n/a`
-will do. Another solution is to use the `SOURCE_DATE_EPOCH` variable, e.g.
-`--with-hotspot-build-time=$(date --date=@$SOURCE_DATE_EPOCH)`.
+ This option controls the copyright year in some generated text files. When
+ the source date is fixed (e.g. by setting `SOURCE_DATE_EPOCH`), the default
+ value for `--with-copyright-year` will be the year of that time stamp.
+ Otherwise the default is the current year at the time of running configure.
+ This can be overridden by `--with-copyright-year=<year>`.
- * Copyright year
+ * `--enable-reproducible-build`
-The copyright year in some generated text files are normally set to the current
-year. This can be overridden by `--with-copyright-year=<year>`. For fully
-reproducible builds, this needs to be set to a fixed value.
+ This option controls some additional behavior needed to make the build
+ reproducible. When the source date is fixed (e.g. by setting
+ `SOURCE_DATE_EPOCH`), this flag will be turned on by default. Otherwise, the
+ value is determined by heuristics. If it is explicitly turned off, the build
+ might not be reproducible.
## Hints and Suggestions for Advanced Users
diff --git a/doc/hotspot-style.html b/doc/hotspot-style.html
index eb0c8de2ae54bd218b5fd8ef2b7d12cd241cdcf6..c93b941c9885fbfc640175f2cfeb82f711cce3cc 100644
--- a/doc/hotspot-style.html
+++ b/doc/hotspot-style.html
@@ -68,7 +68,7 @@
Many of the guidelines mentioned here have (sometimes widespread) counterexamples in the HotSpot code base. Finding a counterexample is not sufficient justification for new code to follow the counterexample as a precedent, since readers of your code will rightfully expect your code to follow the greater bulk of precedents documented here.
Occasionally a guideline mentioned here may be just out of synch with the actual HotSpot code base. If you find that a guideline is consistently contradicted by a large number of counterexamples, please bring it up for discussion and possible change. The architectural rule, of course, is "When in Rome do as the Romans". Sometimes in the suburbs of Rome the rules are a little different; these differences can be pointed out here.
Proposed changes should be discussed on the HotSpot Developers mailing list. Changes are likely to be cautious and incremental, since HotSpot coders have been using these guidelines for years.
-Substantive changes are approved by rough consensus of the HotSpot Group Members. The Group Lead determines whether consensus has been reached.
+Substantive changes are approved by rough consensus of the HotSpot Group Members. The Group Lead determines whether consensus has been reached.
Editorial changes (changes that only affect the description of HotSpot style, not its substance) do not require the full consensus gathering process. The normal HotSpot pull request process may be used for editorial changes, with the additional requirement that the requisite reviewers are also HotSpot Group Members.
Factoring and Class Design
@@ -153,7 +153,7 @@
Whitespace
In general, don't change whitespace unless it improves readability or consistency. Gratuitous whitespace changes will make integrations and backports more difficult.
-Use One-True-Brace-Style. The opening brace for a function or class is normally at the end of the line; it is sometimes moved to the beginning of the next line for emphasis. Substatements are enclosed in braces, even if there is only a single statement. Extremely simple one-line statements may drop braces around a substatement.
+Use One-True-Brace-Style. The opening brace for a function or class is normally at the end of the line; it is sometimes moved to the beginning of the next line for emphasis. Substatements are enclosed in braces, even if there is only a single statement. Extremely simple one-line statements may drop braces around a substatement.
Indentation levels are two columns.
There is no hard line length limit. That said, bear in mind that excessively long lines can cause difficulties. Some people like to have multiple side-by-side windows in their editors, and long lines may force them to choose among unpleasant options. They can use wide windows, reducing the number that can fit across the screen, and wasting a lot of screen real estate because most lines are not that long. Alternatively, they can have more windows across the screen, with long lines wrapping (or worse, requiring scrolling to see in their entirety), which is harder to read. Similar issues exist for side-by-side code reviews.
Tabs are not allowed in code. Set your editor accordingly.
(Emacs: (setq-default indent-tabs-mode nil).)
@@ -210,7 +210,7 @@ while ( test_foo(args...) ) { // No, excess spaces around control
Rationale: Other than to implement exceptions (which HotSpot doesn't use), most potential uses of RTTI are better done via virtual functions. Some of the remainder can be replaced by bespoke mechanisms. The cost of the additional runtime data structures needed to support RTTI are deemed not worthwhile, given the alternatives.
Memory Allocation
Do not use the standard global allocation and deallocation functions (operator new and related functions). Use of these functions by HotSpot code is disabled for some platforms.
-Rationale: HotSpot often uses "resource" or "arena" allocation. Even where heap allocation is used, the standard global functions are avoided in favor of wrappers around malloc and free that support the VM's Native Memory Tracking (NMT) feature.
+Rationale: HotSpot often uses "resource" or "arena" allocation. Even where heap allocation is used, the standard global functions are avoided in favor of wrappers around malloc and free that support the VM's Native Memory Tracking (NMT) feature. Typically, uses of the global operator new are inadvertent and therefore often associated with memory leaks.
Native memory allocation failures are often treated as non-recoverable. The place where "out of memory" is (first) detected may be an innocent bystander, unrelated to the actual culprit.
Class Inheritance
Use public single inheritance.
@@ -270,8 +270,8 @@ while ( test_foo(args...) ) { // No, excess spaces around control
The underlying type of a scoped-enum should also be specified explicitly if conversions may be applied to values of that type.
Due to bugs in certain (very old) compilers, there is widespread use of enums and avoidance of in-class initialization of static integral constant members. Compilers having such bugs are no longer supported. Except where an enum is semantically appropriate, new code should use integral constants.
thread_local
-Do not use thread_local (n2659); instead, use the HotSpot macro THREAD_LOCAL. The initializer must be a constant expression.
-As was discussed in the review for JDK-8230877, thread_local allows dynamic initialization and destruction semantics. However, that support requires a run-time penalty for references to non-function-local thread_local variables defined in a different translation unit, even if they don't need dynamic initialization. Dynamic initialization and destruction of namespace-scoped thread local variables also has the same ordering problems as for ordinary namespace-scoped variables.
+Avoid use of thread_local (n2659); instead, use the HotSpot macro THREAD_LOCAL, for which the initializer must be a constant expression. When thread_local must be used, use the HotSpot macro APPROVED_CPP_THREAD_LOCAL to indicate that the use has been given appropriate consideration.
+As was discussed in the review for JDK-8230877, thread_local allows dynamic initialization and destruction semantics. However, that support requires a run-time penalty for references to non-function-local thread_local variables defined in a different translation unit, even if they don't need dynamic initialization. Dynamic initialization and destruction of non-local thread_local variables also have the same ordering problems as for ordinary non-local variables. So we avoid use of thread_local in general, limiting its use to only those cases where dynamic initialization or destruction are essential. See JDK-8282469 for further discussion.
nullptr
Prefer nullptr (n2431) to NULL. Don't use (constexpr or literal) 0 for pointers.
For historical reasons there are widespread uses of both NULL and of integer 0 as a pointer value.
@@ -438,7 +438,7 @@ while ( test_foo(args...) ) { // No, excess spaces around control
Inline namespaces (n2535) — HotSpot makes very limited use of namespaces.
using namespace directives. In particular, don't use using namespace std; to avoid needing to qualify Standard Library names.
Propagating exceptions (n2179) — HotSpot does not permit the use of exceptions, so this feature isn't useful.
-Avoid namespace-scoped variables with non-constexpr initialization. In particular, avoid variables with types requiring non-trivial initialization or destruction. Initialization order problems can be difficult to deal with and lead to surprises, as can destruction ordering. HotSpot doesn't generally try to cleanup on exit, and running destructors at exit can also lead to problems.
+Avoid non-local variables with non-constexpr initialization. In particular, avoid variables with types requiring non-trivial initialization or destruction. Initialization order problems can be difficult to deal with and lead to surprises, as can destruction ordering. HotSpot doesn't generally try to clean up on exit, and running destructors at exit can also lead to problems.
[[deprecated]] attribute (n3760) — Not relevant in HotSpot code.
Avoid most operator overloading, preferring named functions. When operator overloading is used, ensure the semantics conform to the normal expected behavior of the operation.
Avoid most implicit conversion constructors and (implicit or explicit) conversion operators. (Note that conversion to bool isn't needed in HotSpot code because of the "no implicit boolean" guideline.)
diff --git a/doc/hotspot-style.md b/doc/hotspot-style.md
index 4efce0301b275b467527473d17a8e72d1ce1f551..89d9684672db0ea1960f789e3aa39bc40ccb4bee 100644
--- a/doc/hotspot-style.md
+++ b/doc/hotspot-style.md
@@ -60,7 +60,7 @@ list. Changes are likely to be cautious and incremental, since HotSpot
coders have been using these guidelines for years.
Substantive changes are approved by
-[rough consensus](https://en.wikipedia.org/wiki/Rough_consensus) of
+[rough consensus](https://www.rfc-editor.org/rfc/rfc7282.html) of
the [HotSpot Group](https://openjdk.java.net/census#hotspot) Members.
The Group Lead determines whether consensus has been reached.
@@ -294,7 +294,9 @@ well.
or consistency. Gratuitous whitespace changes will make integrations
and backports more difficult.
-* Use One-True-Brace-Style. The opening brace for a function or class
+* Use [One-True-Brace-Style](
+https://en.wikipedia.org/wiki/Indentation_style#Variant:_1TBS_(OTBS)).
+The opening brace for a function or class
is normally at the end of the line; it is sometimes moved to the
beginning of the next line for emphasis. Substatements are enclosed
in braces, even if there is only a single statement. Extremely simple
@@ -469,7 +471,9 @@ code is disabled for some platforms.
Rationale: HotSpot often uses "resource" or "arena" allocation. Even
where heap allocation is used, the standard global functions are
avoided in favor of wrappers around malloc and free that support the
-VM's Native Memory Tracking (NMT) feature.
+VM's Native Memory Tracking (NMT) feature. Typically, uses of the global
+operator new are inadvertent and therefore often associated with memory
+leaks.
Native memory allocation failures are often treated as non-recoverable.
The place where "out of memory" is (first) detected may be an innocent
@@ -629,7 +633,7 @@ Here are a few closely related example bugs:
### enum
Where appropriate, _scoped-enums_ should be used.
-([n2347](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2347.pdf))
+([n2347](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2347.pdf))
Use of _unscoped-enums_ is permitted, though ordinary constants may be
preferable when the automatic initializer feature isn't used.
@@ -649,10 +653,12 @@ integral constants.
### thread_local
-Do not use `thread_local`
+Avoid use of `thread_local`
([n2659](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2659.htm));
-instead, use the HotSpot macro `THREAD_LOCAL`. The initializer must
-be a constant expression.
+and instead, use the HotSpot macro `THREAD_LOCAL`, for which the initializer must
+be a constant expression. When `thread_local` must be used, use the HotSpot macro
+`APPROVED_CPP_THREAD_LOCAL` to indicate that the use has been given appropriate
+consideration.
As was discussed in the review for
[JDK-8230877](https://mail.openjdk.java.net/pipermail/hotspot-dev/2019-September/039487.html),
@@ -661,14 +667,18 @@ semantics. However, that support requires a run-time penalty for
references to non-function-local `thread_local` variables defined in a
different translation unit, even if they don't need dynamic
initialization. Dynamic initialization and destruction of
-namespace-scoped thread local variables also has the same ordering
-problems as for ordinary namespace-scoped variables.
+non-local `thread_local` variables also has the same ordering
+problems as for ordinary non-local variables. So we avoid use of
+`thread_local` in general, limiting its use to only those cases where dynamic
+initialization or destruction are essential. See
+[JDK-8282469](https://bugs.openjdk.java.net/browse/JDK-8282469)
+for further discussion.
### nullptr
Prefer `nullptr`
([n2431](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2431.pdf))
-to `NULL`. Don't use (constexpr or literal) 0 for pointers.
+to `NULL`. Don't use (constexpr or literal) 0 for pointers.
For historical reasons there are widespread uses of both `NULL` and of
integer 0 as a pointer value.
@@ -937,7 +947,7 @@ References:
* Generalized lambda capture (init-capture) ([N3648])
* Generic (polymorphic) lambda expressions ([N3649])
-[n2657]: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2657.htm
+[n2657]: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2657.htm
[n2927]: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2927.pdf
[N3648]: https://isocpp.org/files/papers/N3648.html
[N3649]: https://isocpp.org/files/papers/N3649.html
@@ -978,7 +988,7 @@ References from C++23
### Additional Permitted Features
* `constexpr`
-([n2235](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2235.pdf))
+([n2235](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2235.pdf))
([n3652](https://isocpp.org/files/papers/N3652.html))
* Sized deallocation
@@ -1064,7 +1074,7 @@ namespace std;` to avoid needing to qualify Standard Library names.
([n2179](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2179.html)) —
HotSpot does not permit the use of exceptions, so this feature isn't useful.
-* Avoid namespace-scoped variables with non-constexpr initialization.
+* Avoid non-local variables with non-constexpr initialization.
In particular, avoid variables with types requiring non-trivial
initialization or destruction. Initialization order problems can be
difficult to deal with and lead to surprises, as can destruction
@@ -1085,14 +1095,14 @@ in HotSpot code because of the "no implicit boolean" guideline.)
* Avoid covariant return types.
-* Avoid `goto` statements.
+* Avoid `goto` statements.
### Undecided Features
This list is incomplete; it serves to explicitly call out some
features that have not yet been discussed.
-* Trailing return type syntax for functions
+* Trailing return type syntax for functions
([n2541](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2541.htm))
* Variable templates
@@ -1106,7 +1116,7 @@ features that have not yet been discussed.
* Rvalue references and move semantics
-[ADL]: https://en.cppreference.com/w/cpp/language/adl
+[ADL]: https://en.cppreference.com/w/cpp/language/adl
"Argument Dependent Lookup"
[ODR]: https://en.cppreference.com/w/cpp/language/definition
diff --git a/make/Hsdis.gmk b/make/Hsdis.gmk
index 02f09b320f095522d29e0e7d7e73438a10f2ee72..ec06a89aaab1f3c714a9213cefc9790b4d07fdd1 100644
--- a/make/Hsdis.gmk
+++ b/make/Hsdis.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2020, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2020, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -35,107 +35,165 @@ include JdkNativeCompilation.gmk
################################################################################
HSDIS_OUTPUT_DIR := $(SUPPORT_OUTPUTDIR)/hsdis
+REAL_HSDIS_NAME := hsdis-$(OPENJDK_TARGET_CPU_LEGACY_LIB)$(SHARED_LIBRARY_SUFFIX)
+BUILT_HSDIS_LIB := $(HSDIS_OUTPUT_DIR)/$(REAL_HSDIS_NAME)
+
+HSDIS_TOOLCHAIN := TOOLCHAIN_DEFAULT
+HSDIS_TOOLCHAIN_CFLAGS := $(CFLAGS_JDKLIB)
+HSDIS_TOOLCHAIN_LDFLAGS := $(LDFLAGS_JDKLIB)
+
+ifeq ($(HSDIS_BACKEND), capstone)
+ ifeq ($(call isTargetCpuArch, x86), true)
+ CAPSTONE_ARCH := CS_ARCH_X86
+ CAPSTONE_MODE := CS_MODE_$(OPENJDK_TARGET_CPU_BITS)
+ else ifeq ($(call isTargetCpuArch, aarch64), true)
+ CAPSTONE_ARCH := CS_ARCH_ARM64
+ CAPSTONE_MODE := CS_MODE_ARM
+ else
+ $(error No support for Capstone on this platform)
+ endif
-ifeq ($(call isTargetOs, windows), true)
- INSTALLED_HSDIS_DIR := $(JDK_OUTPUTDIR)/bin
+ HSDIS_CFLAGS += -DCAPSTONE_ARCH=$(CAPSTONE_ARCH) \
+ -DCAPSTONE_MODE=$(CAPSTONE_MODE)
+endif
- # On windows, we need to "fake" a completely different toolchain using gcc
- # instead of the normal microsoft toolchain. This is quite hacky...
+ifeq ($(HSDIS_BACKEND), llvm)
+ # Use C++ instead of C
+ HSDIS_TOOLCHAIN_CFLAGS := $(CXXFLAGS_JDKLIB)
+ HSDIS_TOOLCHAIN := TOOLCHAIN_LINK_CXX
+
+ ifeq ($(call isTargetOs, linux), true)
+ LLVM_OS := pc-linux-gnu
+ else ifeq ($(call isTargetOs, macosx), true)
+ LLVM_OS := apple-darwin
+ else ifeq ($(call isTargetOs, windows), true)
+ LLVM_OS := pc-windows-msvc
+ else
+ $(error No support for LLVM on this platform)
+ endif
- MINGW_BASE := x86_64-w64-mingw32
+ HSDIS_CFLAGS += -DLLVM_DEFAULT_TRIPLET='"$(OPENJDK_TARGET_CPU)-$(LLVM_OS)"'
+endif
- MINGW_SYSROOT = $(shell $(MINGW_BASE)-gcc -print-sysroot)
- ifeq ($(wildcard $(MINGW_SYSROOT)), )
- # Use fallback path
- MINGW_SYSROOT := /usr/$(MINGW_BASE)
+ifeq ($(HSDIS_BACKEND), binutils)
+ ifeq ($(call isTargetOs, windows), true)
+ # On windows, we need to "fake" a completely different toolchain using gcc
+ # instead of the normal microsoft toolchain. This is quite hacky...
+
+ MINGW_BASE := x86_64-w64-mingw32
+
+ MINGW_SYSROOT = $(shell $(MINGW_BASE)-gcc -print-sysroot)
ifeq ($(wildcard $(MINGW_SYSROOT)), )
- $(error mingw sysroot not found)
+ # Use fallback path
+ MINGW_SYSROOT := /usr/$(MINGW_BASE)
+ ifeq ($(wildcard $(MINGW_SYSROOT)), )
+ $(error mingw sysroot not found)
+ endif
endif
- endif
- $(eval $(call DefineNativeToolchain, TOOLCHAIN_MINGW, \
- CC := $(MINGW_BASE)-gcc, \
- LD := $(MINGW_BASE)-ld, \
- OBJCOPY := $(MINGW_BASE)-objcopy, \
- RC := $(RC), \
- SYSROOT_CFLAGS := --sysroot=$(MINGW_SYSROOT), \
- SYSROOT_LDFLAGS := --sysroot=$(MINGW_SYSROOT), \
- ))
-
- MINGW_SYSROOT_LIB_PATH := $(MINGW_SYSROOT)/mingw/lib
- ifeq ($(wildcard $(MINGW_SYSROOT_LIB_PATH)), )
- # Try without mingw
- MINGW_SYSROOT_LIB_PATH := $(MINGW_SYSROOT)/lib
+ $(eval $(call DefineNativeToolchain, TOOLCHAIN_MINGW, \
+ CC := $(MINGW_BASE)-gcc, \
+ LD := $(MINGW_BASE)-ld, \
+ OBJCOPY := $(MINGW_BASE)-objcopy, \
+ RC := $(RC), \
+ SYSROOT_CFLAGS := --sysroot=$(MINGW_SYSROOT), \
+ SYSROOT_LDFLAGS := --sysroot=$(MINGW_SYSROOT), \
+ ))
+
+ MINGW_SYSROOT_LIB_PATH := $(MINGW_SYSROOT)/mingw/lib
ifeq ($(wildcard $(MINGW_SYSROOT_LIB_PATH)), )
- $(error mingw sysroot lib path not found)
+ # Try without mingw
+ MINGW_SYSROOT_LIB_PATH := $(MINGW_SYSROOT)/lib
+ ifeq ($(wildcard $(MINGW_SYSROOT_LIB_PATH)), )
+ $(error mingw sysroot lib path not found)
+ endif
endif
- endif
- MINGW_VERSION = $(shell $(MINGW_BASE)-gcc -v 2>&1 | $(GREP) "gcc version" | $(CUT) -d " " -f 3)
- MINGW_GCC_LIB_PATH := /usr/lib/gcc/$(MINGW_BASE)/$(MINGW_VERSION)
- ifeq ($(wildcard $(MINGW_GCC_LIB_PATH)), )
- # Try using only major version number
- MINGW_VERSION_MAJOR := $(firstword $(subst ., , $(MINGW_VERSION)))
- MINGW_GCC_LIB_PATH := /usr/lib/gcc/$(MINGW_BASE)/$(MINGW_VERSION_MAJOR)
+ MINGW_VERSION = $(shell $(MINGW_BASE)-gcc -v 2>&1 | $(GREP) "gcc version" | $(CUT) -d " " -f 3)
+ MINGW_GCC_LIB_PATH := /usr/lib/gcc/$(MINGW_BASE)/$(MINGW_VERSION)
ifeq ($(wildcard $(MINGW_GCC_LIB_PATH)), )
- $(error mingw gcc lib path not found)
+ # Try using only major version number
+ MINGW_VERSION_MAJOR := $(firstword $(subst ., , $(MINGW_VERSION)))
+ MINGW_GCC_LIB_PATH := /usr/lib/gcc/$(MINGW_BASE)/$(MINGW_VERSION_MAJOR)
+ ifeq ($(wildcard $(MINGW_GCC_LIB_PATH)), )
+ $(error mingw gcc lib path not found)
+ endif
endif
- endif
- TOOLCHAIN_TYPE := gcc
- OPENJDK_TARGET_OS := linux
- CC_OUT_OPTION := -o$(SPACE)
- LD_OUT_OPTION := -o$(SPACE)
- GENDEPS_FLAGS := -MMD -MF
- CFLAGS_DEBUG_SYMBOLS := -g
- DISABLED_WARNINGS :=
- DISABLE_WARNING_PREFIX := -Wno-
- CFLAGS_WARNINGS_ARE_ERRORS := -Werror
- SHARED_LIBRARY_FLAGS := -shared
-
- HSDIS_TOOLCHAIN := TOOLCHAIN_MINGW
- HSDIS_TOOLCHAIN_CFLAGS :=
- HSDIS_TOOLCHAIN_LDFLAGS := -L$(MINGW_GCC_LIB_PATH) -L$(MINGW_SYSROOT_LIB_PATH)
- MINGW_DLLCRT := $(MINGW_SYSROOT_LIB_PATH)/dllcrt2.o
- HSDIS_TOOLCHAIN_LIBS := $(MINGW_DLLCRT) -lmingw32 -lgcc -lgcc_eh -lmoldname \
- -lmingwex -lmsvcrt -lpthread -ladvapi32 -lshell32 -luser32 -lkernel32
-else
- INSTALLED_HSDIS_DIR := $(JDK_OUTPUTDIR)/lib
-
- HSDIS_TOOLCHAIN := TOOLCHAIN_DEFAULT
- HSDIS_TOOLCHAIN_CFLAGS := $(CFLAGS_JDKLIB)
- HSDIS_TOOLCHAIN_LDFLAGS := $(LDFLAGS_JDKLIB)
- HSDIS_TOOLCHAIN_LIBS := -ldl
+ TOOLCHAIN_TYPE := gcc
+ OPENJDK_TARGET_OS := linux
+ CC_OUT_OPTION := -o$(SPACE)
+ LD_OUT_OPTION := -o$(SPACE)
+ GENDEPS_FLAGS := -MMD -MF
+ CFLAGS_DEBUG_SYMBOLS := -g
+ DISABLED_WARNINGS :=
+ DISABLE_WARNING_PREFIX := -Wno-
+ CFLAGS_WARNINGS_ARE_ERRORS := -Werror
+ SHARED_LIBRARY_FLAGS := -shared
+
+ HSDIS_TOOLCHAIN := TOOLCHAIN_MINGW
+ HSDIS_TOOLCHAIN_CFLAGS :=
+ HSDIS_TOOLCHAIN_LDFLAGS := -L$(MINGW_GCC_LIB_PATH) -L$(MINGW_SYSROOT_LIB_PATH)
+ MINGW_DLLCRT := $(MINGW_SYSROOT_LIB_PATH)/dllcrt2.o
+ HSDIS_TOOLCHAIN_LIBS := $(MINGW_DLLCRT) -lmingw32 -lgcc -lgcc_eh -lmoldname \
+ -lmingwex -lmsvcrt -lpthread -ladvapi32 -lshell32 -luser32 -lkernel32
+ else
+ HSDIS_TOOLCHAIN_LIBS := -ldl
+ endif
endif
-
$(eval $(call SetupJdkLibrary, BUILD_HSDIS, \
NAME := hsdis, \
- SRC := $(TOPDIR)/src/utils/hsdis, \
+ SRC := $(TOPDIR)/src/utils/hsdis/$(HSDIS_BACKEND), \
+ EXTRA_HEADER_DIRS := $(TOPDIR)/src/utils/hsdis, \
TOOLCHAIN := $(HSDIS_TOOLCHAIN), \
OUTPUT_DIR := $(HSDIS_OUTPUT_DIR), \
OBJECT_DIR := $(HSDIS_OUTPUT_DIR), \
DISABLED_WARNINGS_gcc := undef format-nonliteral sign-compare, \
DISABLED_WARNINGS_clang := undef format-nonliteral, \
CFLAGS := $(HSDIS_TOOLCHAIN_CFLAGS) $(HSDIS_CFLAGS), \
- LDFLAGS := $(HSDIS_TOOLCHAIN_LDFLAGS) $(SHARED_LIBRARY_FLAGS), \
+ LDFLAGS := $(HSDIS_TOOLCHAIN_LDFLAGS) $(HSDIS_LDFLAGS) $(SHARED_LIBRARY_FLAGS), \
LIBS := $(HSDIS_LIBS) $(HSDIS_TOOLCHAIN_LIBS), \
))
-build: $(BUILD_HSDIS)
+$(BUILT_HSDIS_LIB): $(BUILD_HSDIS_TARGET)
+ $(install-file)
+
+build: $(BUILD_HSDIS) $(BUILT_HSDIS_LIB)
TARGETS += build
-INSTALLED_HSDIS_NAME := hsdis-$(OPENJDK_TARGET_CPU_LEGACY_LIB)$(SHARED_LIBRARY_SUFFIX)
+ifeq ($(ENABLE_HSDIS_BUNDLING), false)
+
+ ifeq ($(call isTargetOs, windows), true)
+ JDK_HSDIS_DIR := $(JDK_OUTPUTDIR)/bin
+ IMAGE_HSDIS_DIR := $(JDK_IMAGE_DIR)/bin
+ else
+ JDK_HSDIS_DIR := $(JDK_OUTPUTDIR)/lib
+ IMAGE_HSDIS_DIR := $(JDK_IMAGE_DIR)/lib
+ endif
+
-INSTALLED_HSDIS := $(INSTALLED_HSDIS_DIR)/$(INSTALLED_HSDIS_NAME)
+ INSTALLED_HSDIS_JDK := $(JDK_HSDIS_DIR)/$(REAL_HSDIS_NAME)
+ INSTALLED_HSDIS_IMAGE := $(IMAGE_HSDIS_DIR)/$(REAL_HSDIS_NAME)
-$(INSTALLED_HSDIS): $(BUILD_HSDIS_TARGET)
- $(call LogWarn, NOTE: The resulting build might not be redistributable. Seek legal advice before distibuting.)
+ $(INSTALLED_HSDIS_JDK): $(BUILT_HSDIS_LIB)
+ ifeq ($(HSDIS_BACKEND), binutils)
+ $(call LogWarn, NOTE: The resulting build might not be redistributable. Seek legal advice before distributing.)
+ endif
$(install-file)
+ $(INSTALLED_HSDIS_IMAGE): $(BUILT_HSDIS_LIB)
+ $(install-file)
+
+ install: $(INSTALLED_HSDIS_JDK) $(INSTALLED_HSDIS_IMAGE)
+
+else
+
+ install:
+ $(ECHO) NOTE: make install-hsdis is a no-op with --enable-hsdis-bundling
-install: $(INSTALLED_HSDIS)
+endif
TARGETS += install
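As a plain-shell sketch of what the new `HSDIS_BACKEND` logic in this hunk computes for the Capstone case: the target CPU and word size are mapped to Capstone's `CS_ARCH_*`/`CS_MODE_*` constants, and unsupported platforms are rejected. The function name and the error wording are illustrative; the constants follow the hunk above.

```shell
# Map an OpenJDK target CPU/bits pair to the Capstone defines the
# makefile passes via HSDIS_CFLAGS. Unsupported CPUs fail, mirroring
# the $(error ...) branch.
capstone_flags() {
  local cpu="$1" bits="$2"
  case "$cpu" in
    x86*)    echo "-DCAPSTONE_ARCH=CS_ARCH_X86 -DCAPSTONE_MODE=CS_MODE_${bits}" ;;
    aarch64) echo "-DCAPSTONE_ARCH=CS_ARCH_ARM64 -DCAPSTONE_MODE=CS_MODE_ARM" ;;
    *)       echo "error: no Capstone support for $cpu" >&2; return 1 ;;
  esac
}

capstone_flags x86_64 64   # -> -DCAPSTONE_ARCH=CS_ARCH_X86 -DCAPSTONE_MODE=CS_MODE_64
capstone_flags aarch64 64  # -> -DCAPSTONE_ARCH=CS_ARCH_ARM64 -DCAPSTONE_MODE=CS_MODE_ARM
```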
diff --git a/make/InitSupport.gmk b/make/InitSupport.gmk
index d2291c50f2161c41ebd5681cecebc4c9b2a11471..62fdc438c8a68e2e6c09717ab8d7472368bdb2e0 100644
--- a/make/InitSupport.gmk
+++ b/make/InitSupport.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -310,17 +310,16 @@ else # $(HAS_SPEC)=true
# level of reproducible builds
define SetupReproducibleBuild
ifeq ($$(SOURCE_DATE), updated)
- SOURCE_DATE := $$(shell $$(DATE) +"%s")
- endif
- export SOURCE_DATE_EPOCH := $$(SOURCE_DATE)
- ifeq ($$(IS_GNU_DATE), yes)
- export SOURCE_DATE_ISO_8601 := $$(shell $$(DATE) --utc \
- --date="@$$(SOURCE_DATE_EPOCH)" \
- +"%Y-%m-%dT%H:%M:%SZ" 2> /dev/null)
- else
- export SOURCE_DATE_ISO_8601 := $$(shell $$(DATE) -u \
- -j -f "%s" "$$(SOURCE_DATE_EPOCH)" \
- +"%Y-%m-%dT%H:%M:%SZ" 2> /dev/null)
+ # For static values of SOURCE_DATE (not "updated"), these are set in spec.gmk
+ export SOURCE_DATE_EPOCH := $$(shell $$(DATE) +"%s")
+ ifeq ($$(IS_GNU_DATE), yes)
+ export SOURCE_DATE_ISO_8601 := $$(shell $$(DATE) --utc \
+ --date="@$$(SOURCE_DATE_EPOCH)" +"$$(ISO_8601_FORMAT_STRING)" \
+ 2> /dev/null)
+ else
+ export SOURCE_DATE_ISO_8601 := $$(shell $$(DATE) -u -j -f "%s" \
+ "$$(SOURCE_DATE_EPOCH)" +"$$(ISO_8601_FORMAT_STRING)" 2> /dev/null)
+ endif
endif
endef
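The rewritten `SetupReproducibleBuild` above derives `SOURCE_DATE_ISO_8601` from `SOURCE_DATE_EPOCH` with GNU `date`; the BSD branch uses `date -u -j -f "%s"` instead. A minimal sketch of the GNU branch, assuming `ISO_8601_FORMAT_STRING` expands to the usual `%Y-%m-%dT%H:%M:%SZ`:

```shell
# "updated" case: take the current time as the epoch, then render it
# as an ISO-8601 UTC timestamp the way the GNU-date branch does.
epoch=$(date +%s)
iso=$(date --utc --date="@$epoch" +"%Y-%m-%dT%H:%M:%SZ" 2>/dev/null)
echo "$iso"
```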
diff --git a/make/Main.gmk b/make/Main.gmk
index e5ea250bf5aad0dc77180792072e3e5bf2cce7f1..26e470f9c40a2e8bbb620e0cf963fa73bd196f13 100644
--- a/make/Main.gmk
+++ b/make/Main.gmk
@@ -535,6 +535,7 @@ ifneq ($(HSDIS_BACKEND), none)
$(eval $(call SetupTarget, install-hsdis, \
MAKEFILE := Hsdis, \
TARGET := install, \
+ DEPS := jdk-image, \
))
endif
@@ -861,6 +862,10 @@ else
$(foreach t, $(filter-out java.base-libs, $(LIBS_TARGETS)), \
$(eval $t: java.base-libs))
+ ifeq ($(ENABLE_HSDIS_BUNDLING), true)
+ java.base-copy: build-hsdis
+ endif
+
# jdk.accessibility depends on java.desktop
jdk.accessibility-libs: java.desktop-libs
diff --git a/make/ModuleWrapper.gmk b/make/ModuleWrapper.gmk
index e4a8db24aa3c988a7c2885f97140de61be9f3ab7..d83af819a9bc46d053e71cfdac2311b46ea97531 100644
--- a/make/ModuleWrapper.gmk
+++ b/make/ModuleWrapper.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2014, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2014, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -35,6 +35,8 @@ default: all
include $(SPEC)
include MakeBase.gmk
+MODULE_SRC := $(TOPDIR)/src/$(MODULE)
+
# All makefiles should add the targets to be built to this variable.
TARGETS :=
diff --git a/make/ToolsJdk.gmk b/make/ToolsJdk.gmk
index af9def3a415ad5cf5b00ed12acca94cde7715d4c..9eef6969125a289dfa13760034f6e042bde304e4 100644
--- a/make/ToolsJdk.gmk
+++ b/make/ToolsJdk.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -53,7 +53,7 @@ TOOL_GENERATECHARACTER = $(JAVA_SMALL) -cp $(BUILDTOOLS_OUTPUTDIR)/jdk_tools_cla
TOOL_CHARACTERNAME = $(JAVA_SMALL) -cp $(BUILDTOOLS_OUTPUTDIR)/jdk_tools_classes \
build.tools.generatecharacter.CharacterName
-TOOL_DTDBUILDER = $(JAVA_SMALL) -Ddtd_home=$(TOPDIR)/make/data/dtdbuilder \
+TOOL_DTDBUILDER = $(JAVA_SMALL) -Ddtd_home=$(TOPDIR)/src/java.desktop/share/data/dtdbuilder \
-Djava.awt.headless=true \
-cp $(BUILDTOOLS_OUTPUTDIR)/jdk_tools_classes build.tools.dtdbuilder.DTDBuilder
diff --git a/make/UpdateX11Wrappers.gmk b/make/UpdateX11Wrappers.gmk
index ad67966ec8a7b32f47bbb86d9016e7e70dc3bf0a..3201b5f883f7130d6b874a1fd99bb8038ffb9290 100644
--- a/make/UpdateX11Wrappers.gmk
+++ b/make/UpdateX11Wrappers.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2012, 2019, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2012, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -50,7 +50,7 @@ endif
X11WRAPPERS_OUTPUT := $(SUPPORT_OUTPUTDIR)/x11wrappers
GENERATOR_SOURCE_FILE := $(X11WRAPPERS_OUTPUT)/src/data_generator.c
-GENSRC_X11WRAPPERS_DATADIR := $(TOPDIR)/make/data/x11wrappergen
+GENSRC_X11WRAPPERS_DATADIR := $(TOPDIR)/src/java.desktop/unix/data/x11wrappergen
WRAPPER_OUTPUT_FILE := $(GENSRC_X11WRAPPERS_DATADIR)/sizes-$(BITS).txt
BITS := $(OPENJDK_TARGET_CPU_BITS)
diff --git a/make/autoconf/basic_tools.m4 b/make/autoconf/basic_tools.m4
index 9ce90303d452d5f9705d9e195b0f5e6d59fff808..1611e9fd5312ee8ceb46f1a35cdaf926700daf70 100644
--- a/make/autoconf/basic_tools.m4
+++ b/make/autoconf/basic_tools.m4
@@ -80,6 +80,7 @@ AC_DEFUN_ONCE([BASIC_SETUP_FUNDAMENTAL_TOOLS],
# Optional tools, we can do without them
UTIL_LOOKUP_PROGS(DF, df)
+ UTIL_LOOKUP_PROGS(GIT, git)
UTIL_LOOKUP_PROGS(NICE, nice)
UTIL_LOOKUP_PROGS(READLINK, greadlink readlink)
@@ -339,7 +340,6 @@ AC_DEFUN_ONCE([BASIC_SETUP_COMPLEX_TOOLS],
UTIL_LOOKUP_PROGS(READELF, greadelf readelf)
UTIL_LOOKUP_PROGS(DOT, dot)
UTIL_LOOKUP_PROGS(HG, hg)
- UTIL_LOOKUP_PROGS(GIT, git)
UTIL_LOOKUP_PROGS(STAT, stat)
UTIL_LOOKUP_PROGS(TIME, time)
UTIL_LOOKUP_PROGS(FLOCK, flock)
diff --git a/make/autoconf/basic_windows.m4 b/make/autoconf/basic_windows.m4
index 25d10d9b8fee848f4cd366dbc2edf4aa22aef508..fb6fc526bfa219d361aae8783dfbc47c98d3bcc6 100644
--- a/make/autoconf/basic_windows.m4
+++ b/make/autoconf/basic_windows.m4
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -185,6 +185,16 @@ AC_DEFUN([BASIC_SETUP_PATHS_WINDOWS],
AC_MSG_RESULT([unknown])
AC_MSG_WARN([It seems that your find utility is non-standard.])
fi
+
+ if test "x$GIT" != x && test -e $TOPDIR/.git; then
+ git_autocrlf=`$GIT config core.autocrlf`
+ if test "x$git_autocrlf" != x && test "x$git_autocrlf" != "xfalse"; then
+ AC_MSG_NOTICE([Your git configuration does not set core.autocrlf to false.])
+ AC_MSG_NOTICE([If you checked out this code using that setting, the build WILL fail.])
+ AC_MSG_NOTICE([To correct, run "git config --global core.autocrlf false" and re-clone the repo.])
+ AC_MSG_WARN([Code is potentially incorrectly cloned. HIGH RISK of build failure!])
+ fi
+ fi
])
# Verify that the directory is usable on Windows
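The new `core.autocrlf` check added above boils down to a simple rule: the setting must be unset or explicitly `false`, otherwise the Windows build is at high risk of failing on CRLF-mangled sources. A hedged restatement of just that rule (the function name is illustrative):

```shell
# Accept an empty or "false" core.autocrlf value; warn on anything else,
# mirroring the AC_MSG_WARN branch in the m4 code.
autocrlf_ok() {
  local val="$1"
  if [ -n "$val" ] && [ "$val" != "false" ]; then
    echo "WARNING: core.autocrlf=$val; HIGH RISK of build failure" >&2
    return 1
  fi
  return 0
}

autocrlf_ok ""       # unset: fine
autocrlf_ok false    # explicit false: fine
if ! autocrlf_ok true 2>/dev/null; then
  echo "core.autocrlf=true would corrupt checked-out line endings"
fi
```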
diff --git a/make/autoconf/compare.sh.in b/make/autoconf/compare.sh.in
index 1c48f800c8a3430cca9137c19ddb74815505b791..542a516ebc4475cc05d3be52c54e8aedece4b5d0 100644
--- a/make/autoconf/compare.sh.in
+++ b/make/autoconf/compare.sh.in
@@ -53,7 +53,7 @@ export LDD="@LDD@"
export LN="@LN@"
export MKDIR="@MKDIR@"
export MV="@MV@"
-export NM="@GNM@"
+export NM="@NM@"
export OBJDUMP="@OBJDUMP@"
export OTOOL="@OTOOL@"
export PRINTF="@PRINTF@"
diff --git a/make/autoconf/configure b/make/autoconf/configure
index 7e0ece129f4dfb2f9354c7406df590e199c01140..4b26e3d706147fdb04158f5dd3133f130d0c0246 100644
--- a/make/autoconf/configure
+++ b/make/autoconf/configure
@@ -274,11 +274,11 @@ do
# Check for certain autoconf options that require extra action
case $conf_option in
-build | --build | --buil | --bui | --bu |-build=* | --build=* | --buil=* | --bui=* | --bu=*)
- conf_legacy_crosscompile="$conf_legacy_crosscompile $conf_option" ;;
+ conf_build_set=true ;;
-target | --target | --targe | --targ | --tar | --ta | --t | -target=* | --target=* | --targe=* | --targ=* | --tar=* | --ta=* | --t=*)
- conf_legacy_crosscompile="$conf_legacy_crosscompile $conf_option" ;;
+ conf_incompatible_crosscompile="$conf_incompatible_crosscompile $conf_option" ;;
-host | --host | --hos | --ho | -host=* | --host=* | --hos=* | --ho=*)
- conf_legacy_crosscompile="$conf_legacy_crosscompile $conf_option" ;;
+ conf_incompatible_crosscompile="$conf_incompatible_crosscompile $conf_option" ;;
-help | --help | --hel | --he | -h)
conf_print_help=true ;;
esac
@@ -287,23 +287,30 @@ done
# Save the quoted command line
CONFIGURE_COMMAND_LINE="${conf_quoted_arguments[@]}"
-if test "x$conf_legacy_crosscompile" != "x"; then
+if test "x$conf_incompatible_crosscompile" != "x"; then
if test "x$conf_openjdk_target" != "x"; then
- echo "Error: Specifying --openjdk-target together with autoconf"
- echo "legacy cross-compilation flags is not supported."
- echo "You specified: --openjdk-target=$conf_openjdk_target and $conf_legacy_crosscompile."
- echo "The recommended use is just --openjdk-target."
+ echo "Error: --openjdk-target was specified together with"
+ echo "incompatible autoconf cross-compilation flags."
+ echo "You specified: --openjdk-target=$conf_openjdk_target and $conf_incompatible_crosscompile."
+ echo "It is recommended that you only use --openjdk-target."
exit 1
else
- echo "Warning: You are using legacy autoconf cross-compilation flags."
- echo "It is recommended that you use --openjdk-target instead."
+ echo "Warning: You are using misleading autoconf cross-compilation flag(s)."
+ echo "This is not encouraged as use of such flags during building can"
+ echo "quickly become confusing."
+ echo "It is highly recommended that you use --openjdk-target instead."
echo ""
fi
fi
if test "x$conf_openjdk_target" != "x"; then
- conf_build_platform=`sh $conf_script_dir/build-aux/config.guess`
- conf_processed_arguments=("--build=$conf_build_platform" "--host=$conf_openjdk_target" "--target=$conf_openjdk_target" "${conf_processed_arguments[@]}")
+ conf_processed_arguments=("--host=$conf_openjdk_target" "--target=$conf_openjdk_target" "${conf_processed_arguments[@]}")
+
+ # If --build has been explicitly set don't override that flag with our own
+ if test "x$conf_build_set" != xtrue; then
+ conf_build_platform=`sh $conf_script_dir/build-aux/config.guess`
+ conf_processed_arguments=("--build=$conf_build_platform" "${conf_processed_arguments[@]}")
+ fi
fi
# Make configure exit with error on invalid options as default.
@@ -341,7 +348,9 @@ Additional (non-autoconf) OpenJDK Options:
--openjdk-target=TARGET cross-compile with TARGET as target platform
(i.e. the one you will run the resulting binary on).
Equivalent to --host=TARGET --target=TARGET
- --build=
+ --build=, or the platform you
+ have provided if you have explicitly passed
+ --build to configure
--debug-configure Run the configure script with additional debug
logging enabled.
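The configure-script change above can be summarized as: `--openjdk-target=T` always becomes `--host=T --target=T`, and `--build` is only filled in from `config.guess` when the user has not passed it explicitly. A sketch of that argument rewriting, with `guess` standing in for the `config.guess` result:

```shell
# Expand --openjdk-target into autoconf triplet flags, adding --build
# only when it was not set by the user (build_set != true).
expand_target() {
  local target="$1" build_set="$2" guess="$3"
  local args="--host=$target --target=$target"
  if [ "$build_set" != true ]; then
    args="--build=$guess $args"
  fi
  echo "$args"
}

expand_target aarch64-linux-gnu false x86_64-pc-linux-gnu
expand_target aarch64-linux-gnu true  x86_64-pc-linux-gnu
```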
diff --git a/make/autoconf/configure.ac b/make/autoconf/configure.ac
index 29ed3f206aa46722b6b50bbb8d3ed047c764a214..5e58696e34d38a2742c1c75dbd39d9d9bb904189 100644
--- a/make/autoconf/configure.ac
+++ b/make/autoconf/configure.ac
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -152,6 +152,7 @@ BOOTJDK_SETUP_DOCS_REFERENCE_JDK
#
###############################################################################
+JDKOPT_SETUP_REPRODUCIBLE_BUILD
JDKOPT_SETUP_JDK_OPTIONS
###############################################################################
@@ -207,7 +208,6 @@ PLATFORM_SETUP_OPENJDK_TARGET_BITS
PLATFORM_SETUP_OPENJDK_TARGET_ENDIANNESS
# Configure flags for the tools. Need to know if we should build reproducible.
-JDKOPT_SETUP_REPRODUCIBLE_BUILD
FLAGS_SETUP_FLAGS
# Setup debug symbols (need objcopy from the toolchain for that)
@@ -249,7 +249,6 @@ JDKOPT_EXCLUDE_TRANSLATIONS
JDKOPT_ENABLE_DISABLE_MANPAGES
JDKOPT_ENABLE_DISABLE_CDS_ARCHIVE
JDKOPT_ENABLE_DISABLE_COMPATIBLE_CDS_ALIGNMENT
-JDKOPT_SETUP_HSDIS
###############################################################################
#
diff --git a/make/autoconf/flags-cflags.m4 b/make/autoconf/flags-cflags.m4
index 76724235ec4d703e431d1672ba244fa2d7f26bc8..2d7732e6c66b668cfd90bc611c4a0d90bb868634 100644
--- a/make/autoconf/flags-cflags.m4
+++ b/make/autoconf/flags-cflags.m4
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -496,8 +496,8 @@ AC_DEFUN([FLAGS_SETUP_CFLAGS_HELPER],
TOOLCHAIN_CFLAGS_JVM="-qtbtable=full -qtune=balanced \
-qalias=noansi -qstrict -qtls=default -qnortti -qnoeh -qignerrno -qstackprotect"
elif test "x$TOOLCHAIN_TYPE" = xmicrosoft; then
- TOOLCHAIN_CFLAGS_JVM="-nologo -MD -MP"
- TOOLCHAIN_CFLAGS_JDK="-nologo -MD -Zc:wchar_t-"
+ TOOLCHAIN_CFLAGS_JVM="-nologo -MD -Zc:strictStrings -MP"
+ TOOLCHAIN_CFLAGS_JDK="-nologo -MD -Zc:strictStrings -Zc:wchar_t-"
fi
# CFLAGS C language level for JDK sources (hotspot only uses C++)
@@ -803,17 +803,19 @@ AC_DEFUN([FLAGS_SETUP_CFLAGS_CPU_DEP],
fi
AC_SUBST(FILE_MACRO_CFLAGS)
+ FLAGS_SETUP_BRANCH_PROTECTION
+
# EXPORT to API
CFLAGS_JVM_COMMON="$ALWAYS_CFLAGS_JVM $ALWAYS_DEFINES_JVM \
$TOOLCHAIN_CFLAGS_JVM ${$1_TOOLCHAIN_CFLAGS_JVM} \
$OS_CFLAGS $OS_CFLAGS_JVM $CFLAGS_OS_DEF_JVM $DEBUG_CFLAGS_JVM \
$WARNING_CFLAGS $WARNING_CFLAGS_JVM $JVM_PICFLAG $FILE_MACRO_CFLAGS \
- $REPRODUCIBLE_CFLAGS"
+ $REPRODUCIBLE_CFLAGS $BRANCH_PROTECTION_CFLAGS"
CFLAGS_JDK_COMMON="$ALWAYS_CFLAGS_JDK $ALWAYS_DEFINES_JDK $TOOLCHAIN_CFLAGS_JDK \
$OS_CFLAGS $CFLAGS_OS_DEF_JDK $DEBUG_CFLAGS_JDK $DEBUG_OPTIONS_FLAGS_JDK \
$WARNING_CFLAGS $WARNING_CFLAGS_JDK $DEBUG_SYMBOLS_CFLAGS_JDK \
- $FILE_MACRO_CFLAGS $REPRODUCIBLE_CFLAGS"
+ $FILE_MACRO_CFLAGS $REPRODUCIBLE_CFLAGS $BRANCH_PROTECTION_CFLAGS"
# Use ${$2EXTRA_CFLAGS} to block EXTRA_CFLAGS to be added to build flags.
# (Currently we don't have any OPENJDK_BUILD_EXTRA_CFLAGS, but that might
@@ -879,3 +881,24 @@ AC_DEFUN([FLAGS_SETUP_GCC6_COMPILER_FLAGS],
PREFIX: $2, IF_FALSE: [NO_LIFETIME_DSE_CFLAG=""])
$1_GCC6_CFLAGS="${NO_DELETE_NULL_POINTER_CHECKS_CFLAG} ${NO_LIFETIME_DSE_CFLAG}"
])
+
+AC_DEFUN_ONCE([FLAGS_SETUP_BRANCH_PROTECTION],
+[
+ # Is branch protection available?
+ BRANCH_PROTECTION_AVAILABLE=false
+ BRANCH_PROTECTION_FLAG="-mbranch-protection=standard"
+
+ if test "x$OPENJDK_TARGET_CPU" = xaarch64; then
+ if test "x$TOOLCHAIN_TYPE" = xgcc || test "x$TOOLCHAIN_TYPE" = xclang; then
+ FLAGS_COMPILER_CHECK_ARGUMENTS(ARGUMENT: [${BRANCH_PROTECTION_FLAG}],
+ IF_TRUE: [BRANCH_PROTECTION_AVAILABLE=true])
+ fi
+ fi
+
+ BRANCH_PROTECTION_CFLAGS=""
+ UTIL_ARG_ENABLE(NAME: branch-protection, DEFAULT: false,
+ RESULT: USE_BRANCH_PROTECTION, AVAILABLE: $BRANCH_PROTECTION_AVAILABLE,
+ DESC: [enable branch protection when compiling C/C++],
+ IF_ENABLED: [ BRANCH_PROTECTION_CFLAGS=${BRANCH_PROTECTION_FLAG}])
+ AC_SUBST(BRANCH_PROTECTION_CFLAGS)
+])
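The new `FLAGS_SETUP_BRANCH_PROTECTION` macro gates `-mbranch-protection=standard` on three conditions: the target is aarch64, the toolchain is gcc or clang, and the compiler-probe (`FLAGS_COMPILER_CHECK_ARGUMENTS`) accepted the flag. A sketch of that decision, with the probe result passed in rather than actually invoking a compiler:

```shell
# Emit the branch-protection CFLAGS only when all gating conditions
# from the macro above hold; $accepts stands in for the probe result.
branch_protection_cflags() {
  local cpu="$1" toolchain="$2" accepts="$3"
  if [ "$cpu" = aarch64 ] \
      && { [ "$toolchain" = gcc ] || [ "$toolchain" = clang ]; } \
      && [ "$accepts" = true ]; then
    echo "-mbranch-protection=standard"
  fi
}

branch_protection_cflags aarch64 gcc true   # -> -mbranch-protection=standard
branch_protection_cflags x86_64 gcc true    # prints nothing
```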
diff --git a/make/autoconf/flags-ldflags.m4 b/make/autoconf/flags-ldflags.m4
index e9d4557f8665ed879976b5ba04791ec106a54b13..457690ac39165621d505458530cf7c8741319e5a 100644
--- a/make/autoconf/flags-ldflags.m4
+++ b/make/autoconf/flags-ldflags.m4
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -77,7 +77,7 @@ AC_DEFUN([FLAGS_SETUP_LDFLAGS_HELPER],
-fPIC"
elif test "x$TOOLCHAIN_TYPE" = xxlc; then
- BASIC_LDFLAGS="-b64 -brtl -bnorwexec -bnolibpath -bexpall -bernotok -btextpsize:64K \
+ BASIC_LDFLAGS="-b64 -brtl -bnorwexec -bnolibpath -bnoexpall -bernotok -btextpsize:64K \
-bdatapsize:64K -bstackpsize:64K"
# libjvm.so has gotten too large for normal TOC size; compile with qpic=large and link with bigtoc
BASIC_LDFLAGS_JVM_ONLY="-Wl,-lC_r -bbigtoc"
@@ -95,13 +95,10 @@ AC_DEFUN([FLAGS_SETUP_LDFLAGS_HELPER],
fi
# Setup OS-dependent LDFLAGS
- if test "x$TOOLCHAIN_TYPE" = xclang || test "x$TOOLCHAIN_TYPE" = xgcc; then
- if test "x$OPENJDK_TARGET_OS" = xmacosx; then
- # Assume clang or gcc.
- # FIXME: We should really generalize SET_SHARED_LIBRARY_ORIGIN instead.
- OS_LDFLAGS_JVM_ONLY="-Wl,-rpath,@loader_path/. -Wl,-rpath,@loader_path/.."
- OS_LDFLAGS="-mmacosx-version-min=$MACOSX_VERSION_MIN"
- fi
+ if test "x$OPENJDK_TARGET_OS" = xmacosx && test "x$TOOLCHAIN_TYPE" = xclang; then
+ # FIXME: We should really generalize SET_SHARED_LIBRARY_ORIGIN instead.
+ OS_LDFLAGS_JVM_ONLY="-Wl,-rpath,@loader_path/. -Wl,-rpath,@loader_path/.."
+ OS_LDFLAGS="-mmacosx-version-min=$MACOSX_VERSION_MIN"
fi
# Setup debug level-dependent LDFLAGS
diff --git a/make/autoconf/help.m4 b/make/autoconf/help.m4
index 09e82e36c94cd3ba3e1391e3b3703ca3cacb445a..3d6963c7d4d09553d115d335161a6aae76bb0894 100644
--- a/make/autoconf/help.m4
+++ b/make/autoconf/help.m4
@@ -117,6 +117,8 @@ apt_help() {
PKGHANDLER_COMMAND="sudo apt-get install ccache" ;;
dtrace)
PKGHANDLER_COMMAND="sudo apt-get install systemtap-sdt-dev" ;;
+ capstone)
+ PKGHANDLER_COMMAND="sudo apt-get install libcapstone-dev" ;;
esac
}
@@ -168,6 +170,8 @@ brew_help() {
PKGHANDLER_COMMAND="brew install freetype" ;;
ccache)
PKGHANDLER_COMMAND="brew install ccache" ;;
+ capstone)
+ PKGHANDLER_COMMAND="brew install capstone" ;;
esac
}
@@ -292,6 +296,13 @@ AC_DEFUN_ONCE([HELP_PRINT_SUMMARY_AND_WARNINGS],
printf "* OpenJDK target: OS: $OPENJDK_TARGET_OS, CPU architecture: $OPENJDK_TARGET_CPU_ARCH, address length: $OPENJDK_TARGET_CPU_BITS\n"
printf "* Version string: $VERSION_STRING ($VERSION_SHORT)\n"
+ if test "x$SOURCE_DATE" != xupdated; then
+ source_date_info="$SOURCE_DATE ($SOURCE_DATE_ISO_8601)"
+ else
+ source_date_info="Determined at build time"
+ fi
+ printf "* Source date: $source_date_info\n"
+
printf "\n"
printf "Tools summary:\n"
if test "x$OPENJDK_BUILD_OS" = "xwindows"; then
diff --git a/make/autoconf/hotspot.m4 b/make/autoconf/hotspot.m4
index 1cac6bb00c666e814d2e8cfd557661d0327d1d06..18f46036fe5243f940e2f6d2884ed9cdd211791b 100644
--- a/make/autoconf/hotspot.m4
+++ b/make/autoconf/hotspot.m4
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -114,12 +114,26 @@ AC_DEFUN_ONCE([HOTSPOT_SETUP_MISC],
HOTSPOT_TARGET_CPU_ARCH=zero
fi
+
AC_ARG_WITH([hotspot-build-time], [AS_HELP_STRING([--with-hotspot-build-time],
- [timestamp to use in hotspot version string, empty for on-the-fly @<:@empty@:>@])])
+ [timestamp to use in hotspot version string, empty means determined at build time @<:@source-date/empty@:>@])])
+
+ AC_MSG_CHECKING([what hotspot build time to use])
if test "x$with_hotspot_build_time" != x; then
HOTSPOT_BUILD_TIME="$with_hotspot_build_time"
+ AC_MSG_RESULT([$HOTSPOT_BUILD_TIME (from --with-hotspot-build-time)])
+ else
+ if test "x$SOURCE_DATE" = xupdated; then
+ HOTSPOT_BUILD_TIME=""
+ AC_MSG_RESULT([determined at build time (default)])
+ else
+ # If we have a fixed value for SOURCE_DATE, use it as default
+ HOTSPOT_BUILD_TIME="$SOURCE_DATE_ISO_8601"
+ AC_MSG_RESULT([$HOTSPOT_BUILD_TIME (from --with-source-date)])
+ fi
fi
+
AC_SUBST(HOTSPOT_BUILD_TIME)
diff --git a/make/autoconf/jdk-options.m4 b/make/autoconf/jdk-options.m4
index 0a7145c9116a4f4d90f1e9f354c7b78aef25a7b6..2034934cd733241d48ce87044269cb6fc0c06c75 100644
--- a/make/autoconf/jdk-options.m4
+++ b/make/autoconf/jdk-options.m4
@@ -211,16 +211,16 @@ AC_DEFUN_ONCE([JDKOPT_SETUP_JDK_OPTIONS],
# Setup default copyright year. Mostly overridden when building close to a new year.
AC_ARG_WITH(copyright-year, [AS_HELP_STRING([--with-copyright-year],
- [Set copyright year value for build @<:@current year@:>@])])
+ [Set copyright year value for build @<:@current year/source-date@:>@])])
if test "x$with_copyright_year" = xyes; then
AC_MSG_ERROR([Copyright year must have a value])
elif test "x$with_copyright_year" != x; then
COPYRIGHT_YEAR="$with_copyright_year"
- elif test "x$SOURCE_DATE_EPOCH" != x; then
+ elif test "x$SOURCE_DATE" != xupdated; then
if test "x$IS_GNU_DATE" = xyes; then
- COPYRIGHT_YEAR=`date --date=@$SOURCE_DATE_EPOCH +%Y`
+ COPYRIGHT_YEAR=`$DATE --date=@$SOURCE_DATE +%Y`
else
- COPYRIGHT_YEAR=`date -j -f %s $SOURCE_DATE_EPOCH +%Y`
+ COPYRIGHT_YEAR=`$DATE -j -f %s $SOURCE_DATE +%Y`
fi
else
COPYRIGHT_YEAR=`$DATE +'%Y'`
@@ -662,15 +662,28 @@ AC_DEFUN([JDKOPT_ALLOW_ABSOLUTE_PATHS_IN_OUTPUT],
AC_DEFUN_ONCE([JDKOPT_SETUP_REPRODUCIBLE_BUILD],
[
AC_ARG_WITH([source-date], [AS_HELP_STRING([--with-source-date],
- [how to set SOURCE_DATE_EPOCH ('updated', 'current', 'version' a timestamp or an ISO-8601 date) @<:@updated@:>@])],
+ [how to set SOURCE_DATE_EPOCH ('updated', 'current', 'version', a timestamp or an ISO-8601 date) @<:@updated/value of SOURCE_DATE_EPOCH@:>@])],
[with_source_date_present=true], [with_source_date_present=false])
+ if test "x$SOURCE_DATE_EPOCH" != x && test "x$with_source_date" != x; then
+ AC_MSG_WARN([--with-source-date will override SOURCE_DATE_EPOCH])
+ fi
+
AC_MSG_CHECKING([what source date to use])
if test "x$with_source_date" = xyes; then
AC_MSG_ERROR([--with-source-date must have a value])
- elif test "x$with_source_date" = xupdated || test "x$with_source_date" = x; then
- # Tell the makefiles to update at each build
+ elif test "x$with_source_date" = x; then
+ if test "x$SOURCE_DATE_EPOCH" != x; then
+ SOURCE_DATE=$SOURCE_DATE_EPOCH
+ with_source_date_present=true
+ AC_MSG_RESULT([$SOURCE_DATE, from SOURCE_DATE_EPOCH])
+ else
+ # Tell the makefiles to update at each build
+ SOURCE_DATE=updated
+ AC_MSG_RESULT([determined at build time (default)])
+ fi
+ elif test "x$with_source_date" = xupdated; then
SOURCE_DATE=updated
AC_MSG_RESULT([determined at build time, from 'updated'])
elif test "x$with_source_date" = xcurrent; then
@@ -702,6 +715,18 @@ AC_DEFUN_ONCE([JDKOPT_SETUP_REPRODUCIBLE_BUILD],
fi
fi
+ ISO_8601_FORMAT_STRING="%Y-%m-%dT%H:%M:%SZ"
+ if test "x$SOURCE_DATE" != xupdated; then
+ # If we have a fixed value for SOURCE_DATE, we need to set SOURCE_DATE_EPOCH
+ # for the rest of configure.
+ SOURCE_DATE_EPOCH="$SOURCE_DATE"
+ if test "x$IS_GNU_DATE" = xyes; then
+ SOURCE_DATE_ISO_8601=`$DATE --utc --date="@$SOURCE_DATE" +"$ISO_8601_FORMAT_STRING" 2> /dev/null`
+ else
+ SOURCE_DATE_ISO_8601=`$DATE -u -j -f "%s" "$SOURCE_DATE" +"$ISO_8601_FORMAT_STRING" 2> /dev/null`
+ fi
+ fi
+
REPRODUCIBLE_BUILD_DEFAULT=$with_source_date_present
if test "x$OPENJDK_BUILD_OS" = xwindows && \
@@ -726,174 +751,6 @@ AC_DEFUN_ONCE([JDKOPT_SETUP_REPRODUCIBLE_BUILD],
AC_SUBST(SOURCE_DATE)
AC_SUBST(ENABLE_REPRODUCIBLE_BUILD)
-])
-
-################################################################################
-#
-# Helper function to build binutils from source.
-#
-AC_DEFUN([JDKOPT_BUILD_BINUTILS],
-[
- BINUTILS_SRC="$with_binutils_src"
- UTIL_FIXUP_PATH(BINUTILS_SRC)
-
- if ! test -d $BINUTILS_SRC; then
- AC_MSG_ERROR([--with-binutils-src is not pointing to a directory])
- fi
- if ! test -x $BINUTILS_SRC/configure; then
- AC_MSG_ERROR([--with-binutils-src does not look like a binutils source directory])
- fi
-
- if test -e $BINUTILS_SRC/bfd/libbfd.a && \
- test -e $BINUTILS_SRC/opcodes/libopcodes.a && \
- test -e $BINUTILS_SRC/libiberty/libiberty.a && \
- test -e $BINUTILS_SRC/zlib/libz.a; then
- AC_MSG_NOTICE([Found binutils binaries in binutils source directory -- not building])
- else
- # On Windows, we cannot build with the normal Microsoft CL, but must instead use
- # a separate mingw toolchain.
- if test "x$OPENJDK_BUILD_OS" = xwindows; then
- if test "x$OPENJDK_TARGET_CPU" = "xx86"; then
- target_base="i686-w64-mingw32"
- else
- target_base="$OPENJDK_TARGET_CPU-w64-mingw32"
- fi
- binutils_cc="$target_base-gcc"
- binutils_target="--host=$target_base --target=$target_base"
- # Somehow the uint typedef is not included when building with mingw
- binutils_cflags="-Duint=unsigned"
- compiler_version=`$binutils_cc --version 2>&1`
- if ! [ [[ "$compiler_version" =~ GCC ]] ]; then
- AC_MSG_NOTICE([Could not find correct mingw compiler $binutils_cc.])
- HELP_MSG_MISSING_DEPENDENCY([$binutils_cc])
- AC_MSG_ERROR([Cannot continue. $HELP_MSG])
- else
- AC_MSG_NOTICE([Using compiler $binutils_cc with version $compiler_version])
- fi
- elif test "x$OPENJDK_BUILD_OS" = xmacosx; then
- if test "x$OPENJDK_TARGET_CPU" = "xaarch64"; then
- binutils_target="--enable-targets=aarch64-darwin"
- else
- binutils_target=""
- fi
- else
- binutils_cc="$CC $SYSROOT_CFLAGS"
- binutils_target=""
- fi
- binutils_cflags="$binutils_cflags $MACHINE_FLAG $JVM_PICFLAG $C_O_FLAG_NORM"
-
- AC_MSG_NOTICE([Running binutils configure])
- AC_MSG_NOTICE([configure command line: ./configure --disable-nls CFLAGS="$binutils_cflags" CC="$binutils_cc" $binutils_target])
- saved_dir=`pwd`
- cd "$BINUTILS_SRC"
- ./configure --disable-nls CFLAGS="$binutils_cflags" CC="$binutils_cc" $binutils_target
- if test $? -ne 0 || ! test -e $BINUTILS_SRC/Makefile; then
- AC_MSG_NOTICE([Automatic building of binutils failed on configure. Try building it manually])
- AC_MSG_ERROR([Cannot continue])
- fi
- AC_MSG_NOTICE([Running binutils make])
- $MAKE all-opcodes
- if test $? -ne 0; then
- AC_MSG_NOTICE([Automatic building of binutils failed on make. Try building it manually])
- AC_MSG_ERROR([Cannot continue])
- fi
- cd $saved_dir
- AC_MSG_NOTICE([Building of binutils done])
- fi
-
- BINUTILS_DIR="$BINUTILS_SRC"
-])
-
-################################################################################
-#
-# Determine if hsdis should be built, and if so, with which backend.
-#
-AC_DEFUN_ONCE([JDKOPT_SETUP_HSDIS],
-[
- AC_ARG_WITH([hsdis], [AS_HELP_STRING([--with-hsdis],
- [what hsdis backend to use ('none', 'binutils') @<:@none@:>@])])
-
- AC_ARG_WITH([binutils], [AS_HELP_STRING([--with-binutils],
- [where to find the binutils files needed for hsdis/binutils])])
-
- AC_ARG_WITH([binutils-src], [AS_HELP_STRING([--with-binutils-src],
- [where to find the binutils source for building])])
-
- AC_MSG_CHECKING([what hsdis backend to use])
-
- if test "x$with_hsdis" = xyes; then
- AC_MSG_ERROR([--with-hsdis must have a value])
- elif test "x$with_hsdis" = xnone || test "x$with_hsdis" = xno || test "x$with_hsdis" = x; then
- HSDIS_BACKEND=none
- AC_MSG_RESULT(['none', hsdis will not be built])
- elif test "x$with_hsdis" = xbinutils; then
- HSDIS_BACKEND=binutils
- AC_MSG_RESULT(['binutils'])
-
- # We need the binutils static libs and includes.
- if test "x$with_binutils_src" != x; then
- # Try building the source first. If it succeeds, it sets $BINUTILS_DIR.
- JDKOPT_BUILD_BINUTILS
- fi
-
- if test "x$with_binutils" != x; then
- BINUTILS_DIR="$with_binutils"
- fi
-
- binutils_system_error=""
- HSDIS_LIBS=""
- if test "x$BINUTILS_DIR" = xsystem; then
- AC_CHECK_LIB(bfd, bfd_openr, [ HSDIS_LIBS="-lbfd" ], [ binutils_system_error="libbfd not found" ])
- AC_CHECK_LIB(opcodes, disassembler, [ HSDIS_LIBS="$HSDIS_LIBS -lopcodes" ], [ binutils_system_error="libopcodes not found" ])
- AC_CHECK_LIB(iberty, xmalloc, [ HSDIS_LIBS="$HSDIS_LIBS -liberty" ], [ binutils_system_error="libiberty not found" ])
- AC_CHECK_LIB(z, deflate, [ HSDIS_LIBS="$HSDIS_LIBS -lz" ], [ binutils_system_error="libz not found" ])
- HSDIS_CFLAGS="-DLIBARCH_$OPENJDK_TARGET_CPU_LEGACY_LIB"
- elif test "x$BINUTILS_DIR" != x; then
- if test -e $BINUTILS_DIR/bfd/libbfd.a && \
- test -e $BINUTILS_DIR/opcodes/libopcodes.a && \
- test -e $BINUTILS_DIR/libiberty/libiberty.a; then
- HSDIS_CFLAGS="-I$BINUTILS_DIR/include -I$BINUTILS_DIR/bfd -DLIBARCH_$OPENJDK_TARGET_CPU_LEGACY_LIB"
- HSDIS_LIBS="$BINUTILS_DIR/bfd/libbfd.a $BINUTILS_DIR/opcodes/libopcodes.a $BINUTILS_DIR/libiberty/libiberty.a $BINUTILS_DIR/zlib/libz.a"
- fi
- fi
-
- AC_MSG_CHECKING([for binutils to use with hsdis])
- case "x$BINUTILS_DIR" in
- xsystem)
- if test "x$OPENJDK_TARGET_OS" != xlinux; then
- AC_MSG_RESULT([invalid])
- AC_MSG_ERROR([binutils on system is supported for Linux only])
- elif test "x$binutils_system_error" = x; then
- AC_MSG_RESULT([system])
- HSDIS_CFLAGS="$HSDIS_CFLAGS -DSYSTEM_BINUTILS"
- else
- AC_MSG_RESULT([invalid])
- AC_MSG_ERROR([$binutils_system_error])
- fi
- ;;
- x)
- AC_MSG_RESULT([missing])
- AC_MSG_NOTICE([--with-hsdis=binutils requires specifying a binutils installation.])
- AC_MSG_NOTICE([Download binutils from https://www.gnu.org/software/binutils and unpack it,])
- AC_MSG_NOTICE([and point --with-binutils-src to the resulting directory, or use])
- AC_MSG_NOTICE([--with-binutils to point to a pre-built binutils installation.])
- AC_MSG_ERROR([Cannot continue])
- ;;
- *)
- if test "x$HSDIS_LIBS" != x; then
- AC_MSG_RESULT([$BINUTILS_DIR])
- else
- AC_MSG_RESULT([invalid])
- AC_MSG_ERROR([$BINUTILS_DIR does not contain a proper binutils installation])
- fi
- ;;
- esac
- else
- AC_MSG_RESULT([invalid])
- AC_MSG_ERROR([Incorrect hsdis backend "$with_hsdis"])
- fi
-
- AC_SUBST(HSDIS_BACKEND)
- AC_SUBST(HSDIS_CFLAGS)
- AC_SUBST(HSDIS_LIBS)
+ AC_SUBST(ISO_8601_FORMAT_STRING)
+ AC_SUBST(SOURCE_DATE_ISO_8601)
])
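The reproducible-build hunk above converts a fixed `SOURCE_DATE` epoch value into an ISO-8601 string via `$DATE`. A rough standalone sketch of that conversion, assuming GNU date is on the PATH (the BSD variant in the patch uses `date -u -j -f %s` instead); the epoch value here is an arbitrary example, not one from the patch:

```shell
# Mirrors the ISO-8601 conversion in JDKOPT_SETUP_REPRODUCIBLE_BUILD,
# assuming GNU date. 1647302400 is an arbitrary example timestamp.
ISO_8601_FORMAT_STRING="%Y-%m-%dT%H:%M:%SZ"
SOURCE_DATE=1647302400
SOURCE_DATE_ISO_8601=$(date --utc --date="@$SOURCE_DATE" +"$ISO_8601_FORMAT_STRING")
echo "$SOURCE_DATE_ISO_8601"
```

Setting `SOURCE_DATE_EPOCH` in the environment (or passing `--with-source-date`) is what makes configure take this fixed-date path instead of the "determined at build time" default.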
diff --git a/make/autoconf/jdk-version.m4 b/make/autoconf/jdk-version.m4
index 5e64ce9a064f86e2ee6a46e8ec86a108f60f31e4..41f4b1fb1211f77b68b4b1f17c3893d94abc0d89 100644
--- a/make/autoconf/jdk-version.m4
+++ b/make/autoconf/jdk-version.m4
@@ -72,7 +72,9 @@ AC_DEFUN_ONCE([JDKVER_SETUP_JDK_VERSION_NUMBERS],
# Setup username (for use in adhoc version strings etc)
AC_ARG_WITH([build-user], [AS_HELP_STRING([--with-build-user],
[build username to use in version strings])])
- if test "x$with_build_user" != x; then
+ if test "x$with_build_user" = xyes || test "x$with_build_user" = xno; then
+ AC_MSG_ERROR([--with-build-user must have a value])
+ elif test "x$with_build_user" != x; then
USERNAME="$with_build_user"
else
# Outer [ ] to quote m4.
@@ -84,7 +86,7 @@ AC_DEFUN_ONCE([JDKVER_SETUP_JDK_VERSION_NUMBERS],
AC_ARG_WITH(jdk-rc-name, [AS_HELP_STRING([--with-jdk-rc-name],
[Set JDK RC name. This is used for FileDescription and ProductName properties
of MS Windows binaries. @<:@not specified@:>@])])
- if test "x$with_jdk_rc_name" = xyes; then
+ if test "x$with_jdk_rc_name" = xyes || test "x$with_jdk_rc_name" = xno; then
AC_MSG_ERROR([--with-jdk-rc-name must have a value])
elif [ ! [[ $with_jdk_rc_name =~ ^[[:print:]]*$ ]] ]; then
AC_MSG_ERROR([--with-jdk-rc-name contains non-printing characters: $with_jdk_rc_name])
@@ -101,7 +103,7 @@ AC_DEFUN_ONCE([JDKVER_SETUP_JDK_VERSION_NUMBERS],
AC_ARG_WITH(vendor-name, [AS_HELP_STRING([--with-vendor-name],
[Set vendor name. Among others, used to set the 'java.vendor'
and 'java.vm.vendor' system properties. @<:@not specified@:>@])])
- if test "x$with_vendor_name" = xyes; then
+ if test "x$with_vendor_name" = xyes || test "x$with_vendor_name" = xno; then
AC_MSG_ERROR([--with-vendor-name must have a value])
elif [ ! [[ $with_vendor_name =~ ^[[:print:]]*$ ]] ]; then
AC_MSG_ERROR([--with-vendor-name contains non-printing characters: $with_vendor_name])
@@ -115,7 +117,7 @@ AC_DEFUN_ONCE([JDKVER_SETUP_JDK_VERSION_NUMBERS],
# The vendor URL, if any
AC_ARG_WITH(vendor-url, [AS_HELP_STRING([--with-vendor-url],
[Set the 'java.vendor.url' system property @<:@not specified@:>@])])
- if test "x$with_vendor_url" = xyes; then
+ if test "x$with_vendor_url" = xyes || test "x$with_vendor_url" = xno; then
AC_MSG_ERROR([--with-vendor-url must have a value])
elif [ ! [[ $with_vendor_url =~ ^[[:print:]]*$ ]] ]; then
AC_MSG_ERROR([--with-vendor-url contains non-printing characters: $with_vendor_url])
@@ -129,7 +131,7 @@ AC_DEFUN_ONCE([JDKVER_SETUP_JDK_VERSION_NUMBERS],
# The vendor bug URL, if any
AC_ARG_WITH(vendor-bug-url, [AS_HELP_STRING([--with-vendor-bug-url],
[Set the 'java.vendor.url.bug' system property @<:@not specified@:>@])])
- if test "x$with_vendor_bug_url" = xyes; then
+ if test "x$with_vendor_bug_url" = xyes || test "x$with_vendor_bug_url" = xno; then
AC_MSG_ERROR([--with-vendor-bug-url must have a value])
elif [ ! [[ $with_vendor_bug_url =~ ^[[:print:]]*$ ]] ]; then
AC_MSG_ERROR([--with-vendor-bug-url contains non-printing characters: $with_vendor_bug_url])
@@ -143,7 +145,7 @@ AC_DEFUN_ONCE([JDKVER_SETUP_JDK_VERSION_NUMBERS],
# The vendor VM bug URL, if any
AC_ARG_WITH(vendor-vm-bug-url, [AS_HELP_STRING([--with-vendor-vm-bug-url],
[Sets the bug URL which will be displayed when the VM crashes @<:@not specified@:>@])])
- if test "x$with_vendor_vm_bug_url" = xyes; then
+ if test "x$with_vendor_vm_bug_url" = xyes || test "x$with_vendor_vm_bug_url" = xno; then
AC_MSG_ERROR([--with-vendor-vm-bug-url must have a value])
elif [ ! [[ $with_vendor_vm_bug_url =~ ^[[:print:]]*$ ]] ]; then
AC_MSG_ERROR([--with-vendor-vm-bug-url contains non-printing characters: $with_vendor_vm_bug_url])
@@ -160,7 +162,7 @@ AC_DEFUN_ONCE([JDKVER_SETUP_JDK_VERSION_NUMBERS],
# override parts with more specific flags, since these are processed later.
AC_ARG_WITH(version-string, [AS_HELP_STRING([--with-version-string],
[Set version string @<:@calculated@:>@])])
- if test "x$with_version_string" = xyes; then
+ if test "x$with_version_string" = xyes || test "x$with_version_string" = xno; then
AC_MSG_ERROR([--with-version-string must have a value])
elif test "x$with_version_string" != x; then
# Additional [] needed to keep m4 from mangling shell constructs.
@@ -293,7 +295,7 @@ AC_DEFUN_ONCE([JDKVER_SETUP_JDK_VERSION_NUMBERS],
[with_version_feature_present=true], [with_version_feature_present=false])
if test "x$with_version_feature_present" = xtrue; then
- if test "x$with_version_feature" = xyes; then
+ if test "x$with_version_feature" = xyes || test "x$with_version_feature" = xno; then
AC_MSG_ERROR([--with-version-feature must have a value])
else
JDKVER_CHECK_AND_SET_NUMBER(VERSION_FEATURE, $with_version_feature)
@@ -480,7 +482,7 @@ AC_DEFUN_ONCE([JDKVER_SETUP_JDK_VERSION_NUMBERS],
# The version date
AC_ARG_WITH(version-date, [AS_HELP_STRING([--with-version-date],
[Set version date @<:@current source value@:>@])])
- if test "x$with_version_date" = xyes; then
+ if test "x$with_version_date" = xyes || test "x$with_version_date" = xno; then
AC_MSG_ERROR([--with-version-date must have a value])
elif test "x$with_version_date" != x; then
if [ ! [[ $with_version_date =~ ^[0-9]{4}-[0-9]{2}-[0-9]{2}$ ]] ]; then
@@ -499,7 +501,10 @@ AC_DEFUN_ONCE([JDKVER_SETUP_JDK_VERSION_NUMBERS],
AC_MSG_ERROR([--with-vendor-version-string must have a value])
elif [ ! [[ $with_vendor_version_string =~ ^[[:graph:]]*$ ]] ]; then
AC_MSG_ERROR([--with--vendor-version-string contains non-graphical characters: $with_vendor_version_string])
- else
+ elif test "x$with_vendor_version_string" != xno; then
+ # Set vendor version string if --without is not passed
+ # Check not required if an empty value is passed, since VENDOR_VERSION_STRING
+ # would then be set to ""
VENDOR_VERSION_STRING="$with_vendor_version_string"
fi
@@ -507,7 +512,7 @@ AC_DEFUN_ONCE([JDKVER_SETUP_JDK_VERSION_NUMBERS],
AC_ARG_WITH(macosx-bundle-name-base, [AS_HELP_STRING([--with-macosx-bundle-name-base],
[Set the MacOSX Bundle Name base. This is the base name for calculating MacOSX Bundle Names.
@<:@not specified@:>@])])
- if test "x$with_macosx_bundle_name_base" = xyes; then
+ if test "x$with_macosx_bundle_name_base" = xyes || test "x$with_macosx_bundle_name_base" = xno; then
AC_MSG_ERROR([--with-macosx-bundle-name-base must have a value])
elif [ ! [[ $with_macosx_bundle_name_base =~ ^[[:print:]]*$ ]] ]; then
AC_MSG_ERROR([--with-macosx-bundle-name-base contains non-printing characters: $with_macosx_bundle_name_base])
@@ -521,7 +526,7 @@ AC_DEFUN_ONCE([JDKVER_SETUP_JDK_VERSION_NUMBERS],
AC_ARG_WITH(macosx-bundle-id-base, [AS_HELP_STRING([--with-macosx-bundle-id-base],
[Set the MacOSX Bundle ID base. This is the base ID for calculating MacOSX Bundle IDs.
@<:@not specified@:>@])])
- if test "x$with_macosx_bundle_id_base" = xyes; then
+ if test "x$with_macosx_bundle_id_base" = xyes || test "x$with_macosx_bundle_id_base" = xno; then
AC_MSG_ERROR([--with-macosx-bundle-id-base must have a value])
elif [ ! [[ $with_macosx_bundle_id_base =~ ^[[:print:]]*$ ]] ]; then
AC_MSG_ERROR([--with-macosx-bundle-id-base contains non-printing characters: $with_macosx_bundle_id_base])
@@ -542,14 +547,19 @@ AC_DEFUN_ONCE([JDKVER_SETUP_JDK_VERSION_NUMBERS],
[Set the MacOSX Bundle CFBundleVersion field. This key is a machine-readable
string composed of one to three period-separated integers and should represent the
build version. Defaults to the build number.])])
- if test "x$with_macosx_bundle_build_version" = xyes; then
+ if test "x$with_macosx_bundle_build_version" = xyes || test "x$with_macosx_bundle_build_version" = xno; then
AC_MSG_ERROR([--with-macosx-bundle-build-version must have a value])
elif [ ! [[ $with_macosx_bundle_build_version =~ ^[0-9\.]*$ ]] ]; then
AC_MSG_ERROR([--with-macosx-bundle-build-version contains non numbers and periods: $with_macosx_bundle_build_version])
elif test "x$with_macosx_bundle_build_version" != x; then
MACOSX_BUNDLE_BUILD_VERSION="$with_macosx_bundle_build_version"
else
- MACOSX_BUNDLE_BUILD_VERSION="$VERSION_BUILD"
+ if test "x$VERSION_BUILD" != x; then
+ MACOSX_BUNDLE_BUILD_VERSION="$VERSION_BUILD"
+ else
+ MACOSX_BUNDLE_BUILD_VERSION=0
+ fi
+
# If VERSION_OPT consists of only numbers and periods, add it.
if [ [[ $VERSION_OPT =~ ^[0-9\.]+$ ]] ]; then
MACOSX_BUNDLE_BUILD_VERSION="$MACOSX_BUNDLE_BUILD_VERSION.$VERSION_OPT"
diff --git a/make/autoconf/lib-hsdis.m4 b/make/autoconf/lib-hsdis.m4
new file mode 100644
index 0000000000000000000000000000000000000000..f3e5da5f8690199cb86b1e4412e7e7ce19fc917c
--- /dev/null
+++ b/make/autoconf/lib-hsdis.m4
@@ -0,0 +1,336 @@
+#
+# Copyright (c) 2021, 2022, Oracle and/or its affiliates. All rights reserved.
+# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+#
+# This code is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License version 2 only, as
+# published by the Free Software Foundation. Oracle designates this
+# particular file as subject to the "Classpath" exception as provided
+# by Oracle in the LICENSE file that accompanied this code.
+#
+# This code is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+# version 2 for more details (a copy is included in the LICENSE file that
+# accompanied this code).
+#
+# You should have received a copy of the GNU General Public License version
+# 2 along with this work; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+# or visit www.oracle.com if you need additional information or have any
+# questions.
+#
+
+################################################################################
+#
+# Helper function to setup hsdis using Capstone
+#
+AC_DEFUN([LIB_SETUP_HSDIS_CAPSTONE],
+[
+ AC_ARG_WITH(capstone, [AS_HELP_STRING([--with-capstone],
+ [where to find the Capstone files needed for hsdis/capstone])])
+
+ if test "x$with_capstone" != x; then
+ AC_MSG_CHECKING([for capstone])
+ CAPSTONE="$with_capstone"
+ AC_MSG_RESULT([$CAPSTONE])
+
+ HSDIS_CFLAGS="-I${CAPSTONE}/include/capstone"
+ if test "x$OPENJDK_TARGET_OS" != xwindows; then
+ HSDIS_LDFLAGS="-L${CAPSTONE}/lib"
+ HSDIS_LIBS="-lcapstone"
+ else
+ HSDIS_LDFLAGS="-nodefaultlib:libcmt.lib"
+ HSDIS_LIBS="${CAPSTONE}/capstone.lib"
+ fi
+ else
+ if test "x$OPENJDK_TARGET_OS" = xwindows; then
+ # There is no way to auto-detect capstone on Windows
+ AC_MSG_NOTICE([You must specify capstone location using --with-capstone=])
+ AC_MSG_ERROR([Cannot continue])
+ fi
+
+ PKG_CHECK_MODULES(CAPSTONE, capstone, [CAPSTONE_FOUND=yes], [CAPSTONE_FOUND=no])
+ if test "x$CAPSTONE_FOUND" = xyes; then
+ HSDIS_CFLAGS="$CAPSTONE_CFLAGS"
+ HSDIS_LDFLAGS="$CAPSTONE_LDFLAGS"
+ HSDIS_LIBS="$CAPSTONE_LIBS"
+ else
+ HELP_MSG_MISSING_DEPENDENCY([capstone])
+ AC_MSG_NOTICE([Cannot locate capstone which is needed for hsdis/capstone. Try using --with-capstone=. $HELP_MSG])
+ AC_MSG_ERROR([Cannot continue])
+ fi
+ fi
+])
+
+################################################################################
+#
+# Helper function to setup hsdis using LLVM
+#
+AC_DEFUN([LIB_SETUP_HSDIS_LLVM],
+[
+ AC_ARG_WITH([llvm], [AS_HELP_STRING([--with-llvm],
+ [where to find the LLVM files needed for hsdis/llvm])])
+
+ if test "x$with_llvm" != x; then
+ LLVM_DIR="$with_llvm"
+ fi
+
+ if test "x$OPENJDK_TARGET_OS" != xwindows; then
+ if test "x$LLVM_DIR" = x; then
+ # Macs with homebrew can have llvm in different places
+ UTIL_LOOKUP_PROGS(LLVM_CONFIG, llvm-config, [$PATH:/usr/local/opt/llvm/bin:/opt/homebrew/opt/llvm/bin])
+ if test "x$LLVM_CONFIG" = x; then
+ AC_MSG_NOTICE([Cannot locate llvm-config which is needed for hsdis/llvm. Try using --with-llvm=.])
+ AC_MSG_ERROR([Cannot continue])
+ fi
+ else
+ UTIL_LOOKUP_PROGS(LLVM_CONFIG, llvm-config, [$LLVM_DIR/bin])
+ if test "x$LLVM_CONFIG" = x; then
+ AC_MSG_NOTICE([Cannot locate llvm-config in $LLVM_DIR. Check your --with-llvm argument.])
+ AC_MSG_ERROR([Cannot continue])
+ fi
+ fi
+
+ # We need the LLVM flags and libs, and llvm-config provides them for us.
+ HSDIS_CFLAGS=`$LLVM_CONFIG --cflags`
+ HSDIS_LDFLAGS=`$LLVM_CONFIG --ldflags`
+ HSDIS_LIBS=`$LLVM_CONFIG --libs $OPENJDK_TARGET_CPU_ARCH ${OPENJDK_TARGET_CPU_ARCH}disassembler`
+ else
+ if test "x$LLVM_DIR" = x; then
+ AC_MSG_NOTICE([--with-llvm is needed on Windows to point to the LLVM home])
+ AC_MSG_ERROR([Cannot continue])
+ fi
+
+ # Official Windows installations of LLVM do not ship llvm-config, and a self-built
+ # llvm-config produces unusable output, so just ignore it on Windows.
+ if ! test -e $LLVM_DIR/include/llvm-c/lto.h; then
+ AC_MSG_NOTICE([$LLVM_DIR does not seem like a valid LLVM home; include dir is missing])
+ AC_MSG_ERROR([Cannot continue])
+ fi
+ if ! test -e $LLVM_DIR/include/llvm-c/Disassembler.h; then
+ AC_MSG_NOTICE([$LLVM_DIR does not point to a complete LLVM installation.])
+ AC_MSG_NOTICE([The official LLVM distribution is missing crucial files; you need to build LLVM yourself or get all include files elsewhere])
+ AC_MSG_ERROR([Cannot continue])
+ fi
+ if ! test -e $LLVM_DIR/lib/llvm-c.lib; then
+ AC_MSG_NOTICE([$LLVM_DIR does not seem like a valid LLVM home; lib dir is missing])
+ AC_MSG_ERROR([Cannot continue])
+ fi
+ HSDIS_CFLAGS="-I$LLVM_DIR/include"
+ HSDIS_LDFLAGS="-libpath:$LLVM_DIR/lib"
+ HSDIS_LIBS="llvm-c.lib"
+ fi
+])
+
+################################################################################
+#
+# Helper function to build binutils from source.
+#
+AC_DEFUN([LIB_BUILD_BINUTILS],
+[
+ BINUTILS_SRC="$with_binutils_src"
+ UTIL_FIXUP_PATH(BINUTILS_SRC)
+
+ if ! test -d $BINUTILS_SRC; then
+ AC_MSG_ERROR([--with-binutils-src is not pointing to a directory])
+ fi
+ if ! test -x $BINUTILS_SRC/configure; then
+ AC_MSG_ERROR([--with-binutils-src does not look like a binutils source directory])
+ fi
+
+ if test -e $BINUTILS_SRC/bfd/libbfd.a && \
+ test -e $BINUTILS_SRC/opcodes/libopcodes.a && \
+ test -e $BINUTILS_SRC/libiberty/libiberty.a && \
+ test -e $BINUTILS_SRC/zlib/libz.a; then
+ AC_MSG_NOTICE([Found binutils binaries in binutils source directory -- not building])
+ else
+ # On Windows, we cannot build with the normal Microsoft CL, but must instead use
+ # a separate mingw toolchain.
+ if test "x$OPENJDK_BUILD_OS" = xwindows; then
+ if test "x$OPENJDK_TARGET_CPU" = "xx86"; then
+ target_base="i686-w64-mingw32"
+ else
+ target_base="$OPENJDK_TARGET_CPU-w64-mingw32"
+ fi
+ binutils_cc="$target_base-gcc"
+ binutils_target="--host=$target_base --target=$target_base"
+ # Somehow the uint typedef is not included when building with mingw
+ binutils_cflags="-Duint=unsigned"
+ compiler_version=`$binutils_cc --version 2>&1`
+ if ! [ [[ "$compiler_version" =~ GCC ]] ]; then
+ AC_MSG_NOTICE([Could not find correct mingw compiler $binutils_cc.])
+ HELP_MSG_MISSING_DEPENDENCY([$binutils_cc])
+ AC_MSG_ERROR([Cannot continue. $HELP_MSG])
+ else
+ AC_MSG_NOTICE([Using compiler $binutils_cc with version $compiler_version])
+ fi
+ elif test "x$OPENJDK_BUILD_OS" = xmacosx; then
+ if test "x$OPENJDK_TARGET_CPU" = "xaarch64"; then
+ binutils_target="--enable-targets=aarch64-darwin"
+ else
+ binutils_target=""
+ fi
+ else
+ binutils_cc="$CC $SYSROOT_CFLAGS"
+ binutils_target=""
+ fi
+ binutils_cflags="$binutils_cflags $MACHINE_FLAG $JVM_PICFLAG $C_O_FLAG_NORM"
+
+ AC_MSG_NOTICE([Running binutils configure])
+ AC_MSG_NOTICE([configure command line: ./configure --disable-nls CFLAGS="$binutils_cflags" CC="$binutils_cc" $binutils_target])
+ saved_dir=`pwd`
+ cd "$BINUTILS_SRC"
+ ./configure --disable-nls CFLAGS="$binutils_cflags" CC="$binutils_cc" $binutils_target
+ if test $? -ne 0 || ! test -e $BINUTILS_SRC/Makefile; then
+ AC_MSG_NOTICE([Automatic building of binutils failed on configure. Try building it manually])
+ AC_MSG_ERROR([Cannot continue])
+ fi
+ AC_MSG_NOTICE([Running binutils make])
+ $MAKE all-opcodes
+ if test $? -ne 0; then
+ AC_MSG_NOTICE([Automatic building of binutils failed on make. Try building it manually])
+ AC_MSG_ERROR([Cannot continue])
+ fi
+ cd $saved_dir
+ AC_MSG_NOTICE([Building of binutils done])
+ fi
+
+ BINUTILS_DIR="$BINUTILS_SRC"
+])
+
+################################################################################
+#
+# Helper function to setup hsdis using binutils
+#
+AC_DEFUN([LIB_SETUP_HSDIS_BINUTILS],
+[
+ AC_ARG_WITH([binutils], [AS_HELP_STRING([--with-binutils],
+ [where to find the binutils files needed for hsdis/binutils])])
+
+ AC_ARG_WITH([binutils-src], [AS_HELP_STRING([--with-binutils-src],
+ [where to find the binutils source for building])])
+
+ # We need the binutils static libs and includes.
+ if test "x$with_binutils_src" != x; then
+ # Try building the source first. If it succeeds, it sets $BINUTILS_DIR.
+ LIB_BUILD_BINUTILS
+ fi
+
+ if test "x$with_binutils" != x; then
+ BINUTILS_DIR="$with_binutils"
+ fi
+
+ binutils_system_error=""
+ HSDIS_LIBS=""
+ if test "x$BINUTILS_DIR" = xsystem; then
+ AC_CHECK_LIB(bfd, bfd_openr, [ HSDIS_LIBS="-lbfd" ], [ binutils_system_error="libbfd not found" ])
+ AC_CHECK_LIB(opcodes, disassembler, [ HSDIS_LIBS="$HSDIS_LIBS -lopcodes" ], [ binutils_system_error="libopcodes not found" ])
+ AC_CHECK_LIB(iberty, xmalloc, [ HSDIS_LIBS="$HSDIS_LIBS -liberty" ], [ binutils_system_error="libiberty not found" ])
+ AC_CHECK_LIB(z, deflate, [ HSDIS_LIBS="$HSDIS_LIBS -lz" ], [ binutils_system_error="libz not found" ])
+ HSDIS_CFLAGS="-DLIBARCH_$OPENJDK_TARGET_CPU_LEGACY_LIB"
+ elif test "x$BINUTILS_DIR" != x; then
+ if test -e $BINUTILS_DIR/bfd/libbfd.a && \
+ test -e $BINUTILS_DIR/opcodes/libopcodes.a && \
+ test -e $BINUTILS_DIR/libiberty/libiberty.a; then
+ HSDIS_CFLAGS="-I$BINUTILS_DIR/include -I$BINUTILS_DIR/bfd -DLIBARCH_$OPENJDK_TARGET_CPU_LEGACY_LIB"
+ HSDIS_LDFLAGS=""
+ HSDIS_LIBS="$BINUTILS_DIR/bfd/libbfd.a $BINUTILS_DIR/opcodes/libopcodes.a $BINUTILS_DIR/libiberty/libiberty.a $BINUTILS_DIR/zlib/libz.a"
+ fi
+ fi
+
+ AC_MSG_CHECKING([for binutils to use with hsdis])
+ case "x$BINUTILS_DIR" in
+ xsystem)
+ if test "x$OPENJDK_TARGET_OS" != xlinux; then
+ AC_MSG_RESULT([invalid])
+ AC_MSG_ERROR([binutils on system is supported for Linux only])
+ elif test "x$binutils_system_error" = x; then
+ AC_MSG_RESULT([system])
+ HSDIS_CFLAGS="$HSDIS_CFLAGS -DSYSTEM_BINUTILS"
+ else
+ AC_MSG_RESULT([invalid])
+ AC_MSG_ERROR([$binutils_system_error])
+ fi
+ ;;
+ x)
+ AC_MSG_RESULT([missing])
+ AC_MSG_NOTICE([--with-hsdis=binutils requires specifying a binutils installation.])
+ AC_MSG_NOTICE([Download binutils from https://www.gnu.org/software/binutils and unpack it,])
+ AC_MSG_NOTICE([and point --with-binutils-src to the resulting directory, or use])
+ AC_MSG_NOTICE([--with-binutils to point to a pre-built binutils installation.])
+ AC_MSG_ERROR([Cannot continue])
+ ;;
+ *)
+ if test "x$HSDIS_LIBS" != x; then
+ AC_MSG_RESULT([$BINUTILS_DIR])
+ else
+ AC_MSG_RESULT([invalid])
+ AC_MSG_ERROR([$BINUTILS_DIR does not contain a proper binutils installation])
+ fi
+ ;;
+ esac
+])
+
+################################################################################
+#
+# Determine if hsdis should be built, and if so, with which backend.
+#
+AC_DEFUN_ONCE([LIB_SETUP_HSDIS],
+[
+ AC_ARG_WITH([hsdis], [AS_HELP_STRING([--with-hsdis],
+ [what hsdis backend to use ('none', 'capstone', 'llvm', 'binutils') @<:@none@:>@])])
+
+ UTIL_ARG_ENABLE(NAME: hsdis-bundling, DEFAULT: false,
+ RESULT: ENABLE_HSDIS_BUNDLING,
+ DESC: [enable bundling of hsdis to allow HotSpot disassembly out-of-the-box])
+
+ AC_MSG_CHECKING([what hsdis backend to use])
+
+ if test "x$with_hsdis" = xyes; then
+ AC_MSG_ERROR([--with-hsdis must have a value])
+ elif test "x$with_hsdis" = xnone || test "x$with_hsdis" = xno || test "x$with_hsdis" = x; then
+ HSDIS_BACKEND=none
+ AC_MSG_RESULT(['none', hsdis will not be built])
+ elif test "x$with_hsdis" = xcapstone; then
+ HSDIS_BACKEND=capstone
+ AC_MSG_RESULT(['capstone'])
+
+ LIB_SETUP_HSDIS_CAPSTONE
+ elif test "x$with_hsdis" = xllvm; then
+ HSDIS_BACKEND=llvm
+ AC_MSG_RESULT(['llvm'])
+
+ LIB_SETUP_HSDIS_LLVM
+ elif test "x$with_hsdis" = xbinutils; then
+ HSDIS_BACKEND=binutils
+ AC_MSG_RESULT(['binutils'])
+
+ LIB_SETUP_HSDIS_BINUTILS
+ else
+ AC_MSG_RESULT([invalid])
+ AC_MSG_ERROR([Incorrect hsdis backend "$with_hsdis"])
+ fi
+
+ AC_SUBST(HSDIS_BACKEND)
+ AC_SUBST(HSDIS_CFLAGS)
+ AC_SUBST(HSDIS_LDFLAGS)
+ AC_SUBST(HSDIS_LIBS)
+
+ AC_MSG_CHECKING([if hsdis should be bundled])
+ if test "x$ENABLE_HSDIS_BUNDLING" = "xtrue"; then
+ if test "x$HSDIS_BACKEND" = xnone; then
+ AC_MSG_RESULT([no, backend missing])
+ AC_MSG_ERROR([hsdis-bundling requires a hsdis backend. Please set --with-hsdis=]);
+ fi
+ AC_MSG_RESULT([yes])
+ if test "x$HSDIS_BACKEND" = xbinutils; then
+ AC_MSG_WARN([The resulting build might not be redistributable. Seek legal advice before distributing.])
+ fi
+ else
+ AC_MSG_RESULT([no])
+ fi
+ AC_SUBST(ENABLE_HSDIS_BUNDLING)
+])
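The new `LIB_SETUP_HSDIS` macro above dispatches on the `--with-hsdis` value. A minimal shell sketch of that dispatch logic, assuming the same accepted values (the function name here is illustrative, not part of the build system):

```shell
# Sketch of the hsdis backend selection in LIB_SETUP_HSDIS: empty, 'none'
# and 'no' disable hsdis; the three known backends pass through; anything
# else is rejected. (A bare --with-hsdis, i.e. 'yes', is also an error
# upstream; here it simply falls into the invalid branch.)
hsdis_backend() {
  case "x$1" in
    x|xnone|xno) echo "none" ;;
    xcapstone|xllvm|xbinutils) echo "$1" ;;
    *) echo "invalid" ;;
  esac
}

hsdis_backend capstone
hsdis_backend foo
```

The `x$1` prefixing matches the idiom used throughout these m4 macros, guarding against empty values and values starting with `-` being misparsed by `test`.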
diff --git a/make/autoconf/libraries.m4 b/make/autoconf/libraries.m4
index 8e4012910d890774acb9a2769c147e1b14f9951e..fbc8ee7b9c8638fd639e3c34076a25bbce04dc4c 100644
--- a/make/autoconf/libraries.m4
+++ b/make/autoconf/libraries.m4
@@ -28,10 +28,12 @@ m4_include([lib-alsa.m4])
m4_include([lib-bundled.m4])
m4_include([lib-cups.m4])
m4_include([lib-ffi.m4])
+m4_include([lib-fontconfig.m4])
m4_include([lib-freetype.m4])
+m4_include([lib-hsdis.m4])
m4_include([lib-std.m4])
m4_include([lib-x11.m4])
-m4_include([lib-fontconfig.m4])
+
m4_include([lib-tests.m4])
################################################################################
@@ -93,14 +95,17 @@ AC_DEFUN_ONCE([LIB_DETERMINE_DEPENDENCIES],
AC_DEFUN_ONCE([LIB_SETUP_LIBRARIES],
[
LIB_SETUP_STD_LIBS
- LIB_SETUP_X11
+
+ LIB_SETUP_ALSA
+ LIB_SETUP_BUNDLED_LIBS
LIB_SETUP_CUPS
LIB_SETUP_FONTCONFIG
LIB_SETUP_FREETYPE
- LIB_SETUP_ALSA
+ LIB_SETUP_HSDIS
LIB_SETUP_LIBFFI
- LIB_SETUP_BUNDLED_LIBS
LIB_SETUP_MISC_LIBS
+ LIB_SETUP_X11
+
LIB_TESTS_SETUP_GTEST
BASIC_JDKLIB_LIBS=""
diff --git a/make/autoconf/spec.gmk.in b/make/autoconf/spec.gmk.in
index 3dce730970e7d180a3889b8f37e0c56907356b73..5671d4a9f3e5aa1a485ca38f6232598e29e61363 100644
--- a/make/autoconf/spec.gmk.in
+++ b/make/autoconf/spec.gmk.in
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -130,6 +130,13 @@ RELEASE_FILE_LIBC:=@RELEASE_FILE_LIBC@
SOURCE_DATE := @SOURCE_DATE@
ENABLE_REPRODUCIBLE_BUILD := @ENABLE_REPRODUCIBLE_BUILD@
+ISO_8601_FORMAT_STRING := @ISO_8601_FORMAT_STRING@
+
+ifneq ($(SOURCE_DATE), updated)
+ # For "updated" source date value, these are set in InitSupport.gmk
+ export SOURCE_DATE_EPOCH := $(SOURCE_DATE)
+ SOURCE_DATE_ISO_8601 := @SOURCE_DATE_ISO_8601@
+endif
LIBM:=@LIBM@
LIBDL:=@LIBDL@
@@ -360,7 +367,9 @@ ENABLE_COMPATIBLE_CDS_ALIGNMENT := @ENABLE_COMPATIBLE_CDS_ALIGNMENT@
ALLOW_ABSOLUTE_PATHS_IN_OUTPUT := @ALLOW_ABSOLUTE_PATHS_IN_OUTPUT@
HSDIS_BACKEND := @HSDIS_BACKEND@
+ENABLE_HSDIS_BUNDLING := @ENABLE_HSDIS_BUNDLING@
HSDIS_CFLAGS := @HSDIS_CFLAGS@
+HSDIS_LDFLAGS := @HSDIS_LDFLAGS@
HSDIS_LIBS := @HSDIS_LIBS@
# The boot jdk to use. This is overridden in bootcycle-spec.gmk. Make sure to keep
@@ -406,6 +415,7 @@ LIBFFI_CFLAGS:=@LIBFFI_CFLAGS@
ENABLE_LIBFFI_BUNDLING:=@ENABLE_LIBFFI_BUNDLING@
LIBFFI_LIB_FILE:=@LIBFFI_LIB_FILE@
FILE_MACRO_CFLAGS := @FILE_MACRO_CFLAGS@
+BRANCH_PROTECTION_CFLAGS := @BRANCH_PROTECTION_CFLAGS@
STATIC_LIBS_CFLAGS := @STATIC_LIBS_CFLAGS@
@@ -581,7 +591,6 @@ AR := @AR@
ARFLAGS:=@ARFLAGS@
NM:=@NM@
-GNM:=@GNM@
STRIP:=@STRIP@
OBJDUMP:=@OBJDUMP@
CXXFILT:=@CXXFILT@
diff --git a/make/autoconf/toolchain.m4 b/make/autoconf/toolchain.m4
index 5280520b78bf18dad743c1d03e70deb4c7d52b9a..b79d161331d273c5bd456c004ad39ea79cc5f5a6 100644
--- a/make/autoconf/toolchain.m4
+++ b/make/autoconf/toolchain.m4
@@ -39,7 +39,7 @@ VALID_TOOLCHAINS_all="gcc clang xlc microsoft"
# These toolchains are valid on different platforms
VALID_TOOLCHAINS_linux="gcc clang"
-VALID_TOOLCHAINS_macosx="gcc clang"
+VALID_TOOLCHAINS_macosx="clang"
VALID_TOOLCHAINS_aix="xlc"
VALID_TOOLCHAINS_windows="microsoft"
@@ -772,8 +772,6 @@ AC_DEFUN_ONCE([TOOLCHAIN_DETECT_TOOLCHAIN_EXTRA],
else
UTIL_LOOKUP_TOOLCHAIN_PROGS(NM, nm)
fi
- GNM="$NM"
- AC_SUBST(GNM)
fi
# objcopy is used for moving debug symbols to separate files when
@@ -903,8 +901,8 @@ AC_DEFUN_ONCE([TOOLCHAIN_SETUP_BUILD_COMPILERS],
BUILD_LDCXX="$BUILD_LD"
else
if test "x$OPENJDK_BUILD_OS" = xmacosx; then
- UTIL_REQUIRE_PROGS(BUILD_CC, clang cc gcc)
- UTIL_REQUIRE_PROGS(BUILD_CXX, clang++ CC g++)
+ UTIL_REQUIRE_PROGS(BUILD_CC, clang)
+ UTIL_REQUIRE_PROGS(BUILD_CXX, clang++)
else
UTIL_REQUIRE_PROGS(BUILD_CC, cc gcc)
UTIL_REQUIRE_PROGS(BUILD_CXX, CC g++)
diff --git a/make/autoconf/toolchain_microsoft.m4 b/make/autoconf/toolchain_microsoft.m4
index 2e02c531da7818f41327803802b0891fec8cf02c..03d4ae50dfb0165d49e28289a4f62b46658ba484 100644
--- a/make/autoconf/toolchain_microsoft.m4
+++ b/make/autoconf/toolchain_microsoft.m4
@@ -481,6 +481,7 @@ AC_DEFUN([TOOLCHAIN_CHECK_POSSIBLE_MSVC_DLL],
AC_DEFUN([TOOLCHAIN_SETUP_MSVC_DLL],
[
DLL_NAME="$1"
+ DLL_HELP="$2"
MSVC_DLL=
if test "x$OPENJDK_TARGET_CPU" = xx86; then
@@ -565,7 +566,7 @@ AC_DEFUN([TOOLCHAIN_SETUP_MSVC_DLL],
if test "x$MSVC_DLL" = x; then
AC_MSG_CHECKING([for $DLL_NAME])
AC_MSG_RESULT([no])
- AC_MSG_ERROR([Could not find $DLL_NAME. Please specify using --with-msvcr-dll.])
+ AC_MSG_ERROR([Could not find $DLL_NAME. Please specify using ${DLL_HELP}.])
fi
])
@@ -588,7 +589,7 @@ AC_DEFUN([TOOLCHAIN_SETUP_VS_RUNTIME_DLLS],
fi
MSVCR_DLL="$MSVC_DLL"
else
- TOOLCHAIN_SETUP_MSVC_DLL([${MSVCR_NAME}])
+ TOOLCHAIN_SETUP_MSVC_DLL([${MSVCR_NAME}], [--with-msvcr-dll])
MSVCR_DLL="$MSVC_DLL"
fi
AC_SUBST(MSVCR_DLL)
@@ -611,7 +612,7 @@ AC_DEFUN([TOOLCHAIN_SETUP_VS_RUNTIME_DLLS],
fi
MSVCP_DLL="$MSVC_DLL"
else
- TOOLCHAIN_SETUP_MSVC_DLL([${MSVCP_NAME}])
+ TOOLCHAIN_SETUP_MSVC_DLL([${MSVCP_NAME}], [--with-msvcp-dll])
MSVCP_DLL="$MSVC_DLL"
fi
AC_SUBST(MSVCP_DLL)
@@ -636,7 +637,7 @@ AC_DEFUN([TOOLCHAIN_SETUP_VS_RUNTIME_DLLS],
fi
VCRUNTIME_1_DLL="$MSVC_DLL"
else
- TOOLCHAIN_SETUP_MSVC_DLL([${VCRUNTIME_1_NAME}])
+ TOOLCHAIN_SETUP_MSVC_DLL([${VCRUNTIME_1_NAME}], [--with-vcruntime-1-dll])
VCRUNTIME_1_DLL="$MSVC_DLL"
fi
fi
diff --git a/make/autoconf/util.m4 b/make/autoconf/util.m4
index 877165ab3a364e62a5157856ccf554869084fbb7..15f41abafda9b492c112836b28e304c683e55c32 100644
--- a/make/autoconf/util.m4
+++ b/make/autoconf/util.m4
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -230,8 +230,6 @@ AC_DEFUN([UTIL_GET_MATCHING_VALUES],
# Converts an ISO-8601 date/time string to a unix epoch timestamp. If no
# suitable conversion method was found, an empty string is returned.
#
-# Sets the specified variable to the resulting list.
-#
# $1: result variable name
# $2: input date/time string
AC_DEFUN([UTIL_GET_EPOCH_TIMESTAMP],
@@ -241,11 +239,11 @@ AC_DEFUN([UTIL_GET_EPOCH_TIMESTAMP],
timestamp=$($DATE --utc --date=$2 +"%s" 2> /dev/null)
else
# BSD date
- timestamp=$($DATE -u -j -f "%F %T" "$2" "+%s" 2> /dev/null)
+ timestamp=$($DATE -u -j -f "%FT%TZ" "$2" "+%s" 2> /dev/null)
if test "x$timestamp" = x; then
- # Perhaps the time was missing
- timestamp=$($DATE -u -j -f "%F %T" "$2 00:00:00" "+%s" 2> /dev/null)
- # If this did not work, we give up and return the empty string
+ # BSD date cannot handle trailing milliseconds.
+ # Try again, ignoring any trailing characters
+ timestamp=$($DATE -u -j -f "%Y-%m-%dT%H:%M:%S" "$2" "+%s" 2> /dev/null)
fi
fi
$1=$timestamp
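The GNU branch of the conversion above can be exercised directly. A minimal sketch, assuming GNU `date` is on the PATH (the BSD branch instead uses `-j -f` with an explicit format string, as in the hunk above); the timestamp value is an illustrative example:

```shell
# Convert an ISO-8601 timestamp to a Unix epoch value, GNU date style.
# This mirrors the $DATE --utc --date=... +"%s" call in UTIL_GET_EPOCH_TIMESTAMP.
iso="2022-01-17T14:03:25Z"
timestamp=$(date --utc --date="$iso" +"%s" 2> /dev/null)
echo "$timestamp"
```

If `date` cannot parse the input, the variable is left empty, matching the macro's documented "empty string on failure" contract.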
diff --git a/make/common/JarArchive.gmk b/make/common/JarArchive.gmk
index 5a87e4714288ff3dc570bb9747b3af45b29023e3..26b08fc1509017fd18f138bbc8468a46f308c305 100644
--- a/make/common/JarArchive.gmk
+++ b/make/common/JarArchive.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -193,7 +193,8 @@ define SetupJarArchiveBody
$1_UPDATE_CONTENTS=\
if [ "`$(WC) -l $$($1_BIN)/_the.$$($1_JARNAME)_contents | $(AWK) '{ print $$$$1 }'`" -gt "0" ]; then \
$(ECHO) " updating" `$(WC) -l $$($1_BIN)/_the.$$($1_JARNAME)_contents | $(AWK) '{ print $$$$1 }'` files && \
- $$($1_JAR_CMD) --update $$($1_JAR_OPTIONS) --file $$@ @$$($1_BIN)/_the.$$($1_JARNAME)_contents; \
+ $(SORT) $$($1_BIN)/_the.$$($1_JARNAME)_contents > $$($1_BIN)/_the.$$($1_JARNAME)_contents_sorted && \
+ $$($1_JAR_CMD) --update $$($1_JAR_OPTIONS) --file $$@ @$$($1_BIN)/_the.$$($1_JARNAME)_contents_sorted; \
fi $$(NEWLINE)
# The s-variants of the above macros are used when the jar is created from scratch.
# NOTICE: please leave the parentheses space separated otherwise the AIX build will break!
@@ -212,7 +213,9 @@ define SetupJarArchiveBody
| $(SED) 's|$$(src)/|-C $$(src) |g' >> \
$$($1_BIN)/_the.$$($1_JARNAME)_contents) $$(NEWLINE) )
endif
- $1_SUPDATE_CONTENTS=$$($1_JAR_CMD) --update $$($1_JAR_OPTIONS) --file $$@ @$$($1_BIN)/_the.$$($1_JARNAME)_contents $$(NEWLINE)
+ $1_SUPDATE_CONTENTS=\
+ $(SORT) $$($1_BIN)/_the.$$($1_JARNAME)_contents > $$($1_BIN)/_the.$$($1_JARNAME)_contents_sorted && \
+ $$($1_JAR_CMD) --update $$($1_JAR_OPTIONS) --file $$@ @$$($1_BIN)/_the.$$($1_JARNAME)_contents_sorted $$(NEWLINE)
# Use a slightly shorter name for logging, but with enough path to identify this jar.
$1_NAME:=$$(subst $$(OUTPUTDIR)/,,$$($1_JAR))
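The `$(SORT)` step added above exists for build reproducibility. A small standalone illustration (file names are made up) of why sorting the contents list gives a stable jar update order:

```shell
# Filesystem enumeration order is not guaranteed, so the contents list may
# arrive in any order; sorting it first makes the jar update deterministic.
printf '%s\n' '-C bin b.class' '-C bin a.class' > _the.demo_contents
sort _the.demo_contents > _the.demo_contents_sorted
head -1 _the.demo_contents_sorted
```

With a sorted list, two builds of identical inputs feed entries to `jar --update` in the same order, so the resulting archives are bitwise comparable.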
diff --git a/make/common/modules/LauncherCommon.gmk b/make/common/modules/LauncherCommon.gmk
index 7ad0375e2e38ff31419eb47d028a652c2dead647..85056bbe40f0c076ba7d2c9738b1c7a0be776a85 100644
--- a/make/common/modules/LauncherCommon.gmk
+++ b/make/common/modules/LauncherCommon.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -33,13 +33,14 @@ include ToolsJdk.gmk
# On Mac, we have always exported all symbols, probably due to oversight
# and/or misunderstanding. To emulate this, don't hide any symbols
# by default.
-# On AIX/xlc we need at least xlc 13.1 for the symbol hiding (see JDK-8214063)
# Also provide an override for non-conformant libraries.
ifeq ($(TOOLCHAIN_TYPE), gcc)
LAUNCHER_CFLAGS += -fvisibility=hidden
LDFLAGS_JDKEXE += -Wl,--exclude-libs,ALL
else ifeq ($(TOOLCHAIN_TYPE), clang)
LAUNCHER_CFLAGS += -fvisibility=hidden
+else ifeq ($(TOOLCHAIN_TYPE), xlc)
+ LAUNCHER_CFLAGS += -qvisibility=hidden
endif
LAUNCHER_SRC := $(TOPDIR)/src/java.base/share/native/launcher
diff --git a/make/common/modules/LibCommon.gmk b/make/common/modules/LibCommon.gmk
index 8ca3ddfffe9a606186ad0103fb4928330a390085..aa5c9f0a5c6b87415f853d88915335e5e9ad314a 100644
--- a/make/common/modules/LibCommon.gmk
+++ b/make/common/modules/LibCommon.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -36,7 +36,6 @@ WIN_JAVA_LIB := $(SUPPORT_OUTPUTDIR)/native/java.base/libjava/java.lib
# On Mac, we have always exported all symbols, probably due to oversight
# and/or misunderstanding. To emulate this, don't hide any symbols
# by default.
-# On AIX/xlc we need at least xlc 13.1 for the symbol hiding (see JDK-8214063)
# Also provide an override for non-conformant libraries.
ifeq ($(TOOLCHAIN_TYPE), gcc)
CFLAGS_JDKLIB += -fvisibility=hidden
@@ -47,6 +46,10 @@ else ifeq ($(TOOLCHAIN_TYPE), clang)
CFLAGS_JDKLIB += -fvisibility=hidden
CXXFLAGS_JDKLIB += -fvisibility=hidden
EXPORT_ALL_SYMBOLS := -fvisibility=default
+else ifeq ($(TOOLCHAIN_TYPE), xlc)
+ CFLAGS_JDKLIB += -qvisibility=hidden
+ CXXFLAGS_JDKLIB += -qvisibility=hidden
+ EXPORT_ALL_SYMBOLS := -qvisibility=default
endif
# Put the libraries here.
diff --git a/make/conf/jib-profiles.js b/make/conf/jib-profiles.js
index e0041e9185130fb591ae6577c61ae9fe2bc9cc47..f16d7fd12e717f31831a949c7f3aafc2f3b6f0a2 100644
--- a/make/conf/jib-profiles.js
+++ b/make/conf/jib-profiles.js
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2015, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2015, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -258,7 +258,6 @@ var getJibProfilesCommon = function (input, data) {
common.release_profile_base = {
configure_args: [
"--enable-reproducible-build",
- "--with-source-date=current",
],
};
// Extra settings for debug profiles
@@ -1053,10 +1052,10 @@ var getJibProfilesProfiles = function (input, common, data) {
var getJibProfilesDependencies = function (input, common) {
var devkit_platform_revisions = {
- linux_x64: "gcc10.3.0-OL6.4+1.0",
+ linux_x64: "gcc11.2.0-OL6.4+1.0",
macosx: "Xcode12.4+1.0",
windows_x64: "VS2019-16.9.3+1.0",
- linux_aarch64: "gcc10.3.0-OL7.6+1.0",
+ linux_aarch64: "gcc11.2.0-OL7.6+1.0",
linux_arm: "gcc8.2.0-Fedora27+1.0",
linux_ppc64le: "gcc8.2.0-Fedora27+1.0",
linux_s390x: "gcc8.2.0-Fedora27+1.0"
@@ -1424,7 +1423,10 @@ var getVersion = function (feature, interim, update, patch, extra1, extra2, extr
* other version inputs
*/
var versionArgs = function(input, common) {
- var args = ["--with-version-build=" + common.build_number];
+ var args = [];
+ if (common.build_number != 0) {
+ args = concat(args, "--with-version-build=" + common.build_number);
+ }
if (input.build_type == "promoted") {
args = concat(args,
"--with-version-pre=" + version_numbers.get("DEFAULT_PROMOTED_VERSION_PRE"),
@@ -1444,6 +1446,14 @@ var versionArgs = function(input, common) {
} else {
args = concat(args, "--with-version-opt=" + common.build_id);
}
+ var sourceDate;
+ if (input.build_id_data && input.build_id_data.creationTime) {
+ sourceDate = Math.floor(Date.parse(input.build_id_data.creationTime)/1000);
+ } else {
+ sourceDate = "current";
+ }
+ args = concat(args, "--with-source-date=" + sourceDate);
+
return args;
}
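The `versionArgs` change above derives `--with-source-date` from the build metadata when available. A shell analogue (the millisecond value is hypothetical) of the milliseconds-to-seconds truncation that `Math.floor(Date.parse(...)/1000)` performs:

```shell
# Date.parse returns epoch milliseconds; --with-source-date wants whole seconds.
creation_ms=1642428205123   # hypothetical build_id_data.creationTime in millis
if [ -n "$creation_ms" ]; then
  source_date=$(( creation_ms / 1000 ))   # integer division truncates, like Math.floor
else
  source_date=current                     # fall back when no creation time exists
fi
echo "--with-source-date=$source_date"
```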
diff --git a/make/data/publicsuffixlist/VERSION b/make/data/publicsuffixlist/VERSION
deleted file mode 100644
index 3367b24a0be6ffa9c2c00af11c65423987ebb9a0..0000000000000000000000000000000000000000
--- a/make/data/publicsuffixlist/VERSION
+++ /dev/null
@@ -1,2 +0,0 @@
-Github: https://raw.githubusercontent.com/publicsuffix/list/cbbba1d234670453df9c930dfbf510c0474d4301/public_suffix_list.dat
-Date: 2020-04-24
diff --git a/make/devkit/Tools.gmk b/make/devkit/Tools.gmk
index 19eccf89be2ac185f32d6dd7432671e22f80206a..e94a74d0063e1a254f7e92d8be720bcc7b0a5d92 100644
--- a/make/devkit/Tools.gmk
+++ b/make/devkit/Tools.gmk
@@ -87,8 +87,17 @@ endif
# Define external dependencies
# Latest that could be made to work.
-GCC_VER := 10.3.0
-ifeq ($(GCC_VER), 10.3.0)
+GCC_VER := 11.2.0
+ifeq ($(GCC_VER), 11.2.0)
+ gcc_ver := gcc-11.2.0
+ binutils_ver := binutils-2.37
+ ccache_ver := ccache-3.7.12
+ mpfr_ver := mpfr-4.1.0
+ gmp_ver := gmp-6.2.1
+ mpc_ver := mpc-1.2.1
+ gdb_ver := gdb-11.1
+ REQUIRED_MIN_MAKE_MAJOR_VERSION := 4
+else ifeq ($(GCC_VER), 10.3.0)
gcc_ver := gcc-10.3.0
binutils_ver := binutils-2.36.1
ccache_ver := ccache-3.7.11
diff --git a/make/hotspot/lib/CompileGtest.gmk b/make/hotspot/lib/CompileGtest.gmk
index cb2bbccc1686aa4a28a8f8557523eee75d0b80ca..f16b9a747bcc435320386cf394b0e183a8bac6d2 100644
--- a/make/hotspot/lib/CompileGtest.gmk
+++ b/make/hotspot/lib/CompileGtest.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2016, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2016, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -49,7 +49,7 @@ $(eval $(call SetupJdkLibrary, BUILD_GTEST_LIBGTEST, \
$(GTEST_FRAMEWORK_SRC)/googletest/src \
$(GTEST_FRAMEWORK_SRC)/googlemock/src, \
INCLUDE_FILES := gtest-all.cc gmock-all.cc, \
- DISABLED_WARNINGS_gcc := undef unused-result format-nonliteral, \
+ DISABLED_WARNINGS_gcc := undef unused-result format-nonliteral maybe-uninitialized, \
DISABLED_WARNINGS_clang := undef unused-result format-nonliteral, \
CFLAGS := $(JVM_CFLAGS) \
-I$(GTEST_FRAMEWORK_SRC)/googletest \
diff --git a/make/jdk/src/classes/build/tools/generatecharacter/CharacterScript.java b/make/jdk/src/classes/build/tools/generatecharacter/CharacterScript.java
index fda7a561e87f92f2ecb58b1ca625dbda618dfcc0..d242cb8ed42912d8bf01680a28db13b7a53562fb 100644
--- a/make/jdk/src/classes/build/tools/generatecharacter/CharacterScript.java
+++ b/make/jdk/src/classes/build/tools/generatecharacter/CharacterScript.java
@@ -115,7 +115,7 @@ public class CharacterScript {
for (j = 0; j < scriptSize; j++) {
for (int cp = scripts[j][0]; cp <= scripts[j][1]; cp++) {
- String name = names[scripts[j][2]].toUpperCase(Locale.ENGLISH);;
+ String name = names[scripts[j][2]].toUpperCase(Locale.ENGLISH);
if (cp > 0xffff)
System.out.printf("%05X %s%n", cp, name);
else
diff --git a/make/langtools/src/classes/build/tools/symbolgenerator/CreateSymbols.java b/make/langtools/src/classes/build/tools/symbolgenerator/CreateSymbols.java
index ea700f0b660c6558c21e1dfd91d195d4eaaf0988..41f600a817e0df7102fa856228814ba20689935a 100644
--- a/make/langtools/src/classes/build/tools/symbolgenerator/CreateSymbols.java
+++ b/make/langtools/src/classes/build/tools/symbolgenerator/CreateSymbols.java
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2006, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2006, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -156,7 +156,7 @@ import java.util.Optional;
* A tool for processing the .sym.txt files.
*
* To add historical data for JDK N, N >= 11, do the following:
- * * cd /make/data/symbols
+ * * cd /src/jdk.compiler/share/data/symbols
* * /bin/java --add-exports jdk.jdeps/com.sun.tools.classfile=ALL-UNNAMED \
* --add-exports jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED \
* --add-exports jdk.compiler/com.sun.tools.javac.jvm=ALL-UNNAMED \
@@ -164,7 +164,7 @@ import java.util.Optional;
* --add-modules jdk.jdeps \
* ../../../make/langtools/src/classes/build/tools/symbolgenerator/CreateSymbols.java \
* build-description-incremental symbols include.list
* * sanity-check the new and updated files in src/jdk.compiler/share/data/symbols and commit them
+ * * sanity-check the new and updates files in src/jdk.compiler/share/data/symbols and commit them
*
* The tool allows you to:
* * convert the .sym.txt into class/sig files for ct.sym
@@ -212,7 +212,8 @@ import java.util.Optional;
* To generate the .sym.txt files for OpenJDK 7 and 8:
* /bin/java build.tools.symbolgenerator.Probe OpenJDK7.classes
* /bin/java build.tools.symbolgenerator.Probe OpenJDK8.classes
- * java build.tools.symbolgenerator.CreateSymbols build-description make/data/symbols $TOPDIR make/data/symbols/include.list
+ * java build.tools.symbolgenerator.CreateSymbols build-description src/jdk.compiler/share/data/symbols
+ * $TOPDIR src/jdk.compiler/share/data/symbols/include.list
* 8 OpenJDK8.classes ''
* 7 OpenJDK7.classes 8
*
diff --git a/make/langtools/tools/genstubs/GenStubs.java b/make/langtools/tools/genstubs/GenStubs.java
index 9f8fc7a7a596132f2d8dc77246141ce62d10729a..bcf73fc5f71c89795b9b112ea72a5abdbd40c84f 100644
--- a/make/langtools/tools/genstubs/GenStubs.java
+++ b/make/langtools/tools/genstubs/GenStubs.java
@@ -213,7 +213,7 @@ public class GenStubs {
long prevClassMods = currClassMods;
currClassMods = tree.mods.flags;
try {
- super.visitClassDef(tree);;
+ super.visitClassDef(tree);
} finally {
currClassMods = prevClassMods;
}
diff --git a/make/modules/java.base/Copy.gmk b/make/modules/java.base/Copy.gmk
index d61a274317296b60c32e75458a9b921a8a78ef34..16d1b8d910cfe521a266bc23c9215c0cf2aca38f 100644
--- a/make/modules/java.base/Copy.gmk
+++ b/make/modules/java.base/Copy.gmk
@@ -246,6 +246,23 @@ ifeq ($(ENABLE_LIBFFI_BUNDLING), true)
TARGETS += $(COPY_LIBFFI)
endif
+################################################################################
+# Optionally copy hsdis into the image
+
+ifeq ($(ENABLE_HSDIS_BUNDLING), true)
+ HSDIS_NAME := hsdis-$(OPENJDK_TARGET_CPU_LEGACY_LIB)$(SHARED_LIBRARY_SUFFIX)
+ HSDIS_PATH := $(SUPPORT_OUTPUTDIR)/hsdis/$(HSDIS_NAME)
+
+ $(eval $(call SetupCopyFiles, COPY_HSDIS, \
+ FILES := $(HSDIS_PATH), \
+ DEST := $(call FindLibDirForModule, $(MODULE)), \
+ FLATTEN := true, \
+ MACRO := install-file-nolink, \
+ ))
+
+ TARGETS += $(COPY_HSDIS)
+endif
+
################################################################################
# Generate classfile_constants.h
diff --git a/make/modules/java.base/Gendata.gmk b/make/modules/java.base/Gendata.gmk
index 4b894eeae4a6634b14f1491b323d03998856a77a..9e5cfe2d0fc40e10ff9c66e8e225ea154102a008 100644
--- a/make/modules/java.base/Gendata.gmk
+++ b/make/modules/java.base/Gendata.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -39,7 +39,7 @@ include gendata/GendataPublicSuffixList.gmk
GENDATA_UNINAME := $(JDK_OUTPUTDIR)/modules/java.base/java/lang/uniName.dat
-$(GENDATA_UNINAME): $(TOPDIR)/make/data/unicodedata/UnicodeData.txt $(BUILD_TOOLS_JDK)
+$(GENDATA_UNINAME): $(MODULE_SRC)/share/data/unicodedata/UnicodeData.txt $(BUILD_TOOLS_JDK)
$(call MakeDir, $(@D))
$(TOOL_CHARACTERNAME) $< $@
@@ -49,7 +49,7 @@ TARGETS += $(GENDATA_UNINAME)
GENDATA_CURDATA := $(JDK_OUTPUTDIR)/modules/java.base/java/util/currency.data
-$(GENDATA_CURDATA): $(TOPDIR)/make/data/currency/CurrencyData.properties $(BUILD_TOOLS_JDK)
+$(GENDATA_CURDATA): $(MODULE_SRC)/share/data/currency/CurrencyData.properties $(BUILD_TOOLS_JDK)
$(call MakeDir, $(@D))
$(RM) $@
$(TOOL_GENERATECURRENCYDATA) -o $@.tmp -i $<
@@ -63,7 +63,7 @@ TARGETS += $(GENDATA_CURDATA)
ifneq ($(CACERTS_SRC), )
GENDATA_CACERTS_SRC := $(CACERTS_SRC)
else
- GENDATA_CACERTS_SRC := $(TOPDIR)/make/data/cacerts/
+ GENDATA_CACERTS_SRC := $(MODULE_SRC)/share/data/cacerts/
endif
GENDATA_CACERTS := $(SUPPORT_OUTPUTDIR)/modules_libs/java.base/security/cacerts
@@ -78,7 +78,7 @@ endif
################################################################################
-GENDATA_JAVA_SECURITY_SRC := $(TOPDIR)/src/java.base/share/conf/security/java.security
+GENDATA_JAVA_SECURITY_SRC := $(MODULE_SRC)/share/conf/security/java.security
GENDATA_JAVA_SECURITY := $(SUPPORT_OUTPUTDIR)/modules_conf/java.base/security/java.security
ifeq ($(UNLIMITED_CRYPTO), true)
diff --git a/make/modules/java.base/Gensrc.gmk b/make/modules/java.base/Gensrc.gmk
index 9ea2d015d3bec2901cd6ca94a6ba77b629dc0158..9c9576bdd4a3290acc5518cd09a6e3abe0b00535 100644
--- a/make/modules/java.base/Gensrc.gmk
+++ b/make/modules/java.base/Gensrc.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -46,8 +46,8 @@ TARGETS += $(GENSRC_BASELOCALEDATA)
CLDR_DATA_DIR := $(TOPDIR)/make/data/cldr/common
GENSRC_DIR := $(SUPPORT_OUTPUTDIR)/gensrc/java.base
CLDR_GEN_DONE := $(GENSRC_DIR)/_cldr-gensrc.marker
-TZ_DATA_DIR := $(TOPDIR)/make/data/tzdata
-ZONENAME_TEMPLATE := $(TOPDIR)/src/java.base/share/classes/java/time/format/ZoneName.java.template
+TZ_DATA_DIR := $(MODULE_SRC)/share/data/tzdata
+ZONENAME_TEMPLATE := $(MODULE_SRC)/share/classes/java/time/format/ZoneName.java.template
$(CLDR_GEN_DONE): $(wildcard $(CLDR_DATA_DIR)/dtd/*.dtd) \
$(wildcard $(CLDR_DATA_DIR)/main/en*.xml) \
@@ -74,12 +74,12 @@ TARGETS += $(CLDR_GEN_DONE)
include GensrcProperties.gmk
$(eval $(call SetupCompileProperties, LIST_RESOURCE_BUNDLE, \
- SRC_DIRS := $(TOPDIR)/src/java.base/share/classes/sun/launcher/resources, \
+ SRC_DIRS := $(MODULE_SRC)/share/classes/sun/launcher/resources, \
CLASS := ListResourceBundle, \
))
$(eval $(call SetupCompileProperties, SUN_UTIL, \
- SRC_DIRS := $(TOPDIR)/src/java.base/share/classes/sun/util/resources, \
+ SRC_DIRS := $(MODULE_SRC)/share/classes/sun/util/resources, \
CLASS := sun.util.resources.LocaleNamesBundle, \
))
@@ -98,7 +98,7 @@ TARGETS += $(COPY_ZH_HK)
GENSRC_LSREQUIVMAPS := $(SUPPORT_OUTPUTDIR)/gensrc/java.base/sun/util/locale/LocaleEquivalentMaps.java
-$(GENSRC_LSREQUIVMAPS): $(TOPDIR)/make/data/lsrdata/language-subtag-registry.txt $(BUILD_TOOLS_JDK)
+$(GENSRC_LSREQUIVMAPS): $(MODULE_SRC)/share/data/lsrdata/language-subtag-registry.txt $(BUILD_TOOLS_JDK)
$(call MakeDir, $(@D))
$(TOOL_GENERATELSREQUIVMAPS) $< $@ $(COPYRIGHT_YEAR)
diff --git a/make/modules/java.base/gendata/GendataBlockedCerts.gmk b/make/modules/java.base/gendata/GendataBlockedCerts.gmk
index 65f75012a33d5648b3e6a3d1a7cc3b6612a3964d..b6149b457cd5093b9fcd1b73d9bedc2478d3f25c 100644
--- a/make/modules/java.base/gendata/GendataBlockedCerts.gmk
+++ b/make/modules/java.base/gendata/GendataBlockedCerts.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2014, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2014, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -23,7 +23,7 @@
# questions.
#
-GENDATA_BLOCKED_CERTS_SRC += $(TOPDIR)/make/data/blockedcertsconverter/blocked.certs.pem
+GENDATA_BLOCKED_CERTS_SRC += $(MODULE_SRC)/share/data/blockedcertsconverter/blocked.certs.pem
GENDATA_BLOCKED_CERTS := $(SUPPORT_OUTPUTDIR)/modules_libs/$(MODULE)/security/blocked.certs
$(GENDATA_BLOCKED_CERTS): $(BUILD_TOOLS_JDK) $(GENDATA_BLOCKED_CERTS_SRC)
diff --git a/make/modules/java.base/gendata/GendataBreakIterator.gmk b/make/modules/java.base/gendata/GendataBreakIterator.gmk
index d314253b4fe31bc1e3d849e53176375858e90f49..857ce2b7c34fc94fc82be091de65d211bb385740 100644
--- a/make/modules/java.base/gendata/GendataBreakIterator.gmk
+++ b/make/modules/java.base/gendata/GendataBreakIterator.gmk
@@ -1,5 +1,5 @@
-# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -74,7 +74,7 @@ BREAK_ITERATOR_BOOTCLASSPATH := \
# Generate data resource files.
# input
-UNICODEDATA := $(TOPDIR)/make/data/unicodedata/UnicodeData.txt
+UNICODEDATA := $(MODULE_SRC)/share/data/unicodedata/UnicodeData.txt
# output
BASE_DATA_PKG_DIR := $(JDK_OUTPUTDIR)/modules/java.base/sun/text/resources
diff --git a/make/modules/java.base/gendata/GendataPublicSuffixList.gmk b/make/modules/java.base/gendata/GendataPublicSuffixList.gmk
index 757098a619fc2c89d9c4dedc6219cb7f838dbd2f..189fccf0c0da535a2ceebfbe036f1331b7d0f0df 100644
--- a/make/modules/java.base/gendata/GendataPublicSuffixList.gmk
+++ b/make/modules/java.base/gendata/GendataPublicSuffixList.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2017, 2019, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2017, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -25,7 +25,7 @@
include $(SPEC)
-GENDATA_PUBLICSUFFIXLIST_SRC += $(TOPDIR)/make/data/publicsuffixlist/public_suffix_list.dat
+GENDATA_PUBLICSUFFIXLIST_SRC += $(MODULE_SRC)/share/data/publicsuffixlist/public_suffix_list.dat
GENDATA_PUBLICSUFFIXLIST := $(SUPPORT_OUTPUTDIR)/modules_libs/$(MODULE)/security/public_suffix_list.dat
$(GENDATA_PUBLICSUFFIXLIST): $(GENDATA_PUBLICSUFFIXLIST_SRC) $(BUILD_TOOLS_JDK)
diff --git a/make/modules/java.base/gendata/GendataTZDB.gmk b/make/modules/java.base/gendata/GendataTZDB.gmk
index 1352178694fe357c96bc4b9fa525560534ec3e54..593ed8a8f115879016afae95621de59080b0d705 100644
--- a/make/modules/java.base/gendata/GendataTZDB.gmk
+++ b/make/modules/java.base/gendata/GendataTZDB.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2012, 2018, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2012, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -28,7 +28,7 @@ GENDATA_TZDB :=
#
# Time zone data file creation
#
-TZDATA_DIR := $(TOPDIR)/make/data/tzdata
+TZDATA_DIR := $(MODULE_SRC)/share/data/tzdata
TZDATA_TZFILE := africa antarctica asia australasia europe northamerica southamerica backward etcetera gmt jdk11_backward
TZDATA_TZFILES := $(addprefix $(TZDATA_DIR)/,$(TZDATA_TZFILE))
diff --git a/make/modules/java.base/gensrc/GensrcBuffer.gmk b/make/modules/java.base/gensrc/GensrcBuffer.gmk
index 6ad432fb86678b4bc14bcc8fab711e47b2d23939..ce22230a8e18444ccd597555c71c4a3225f44f75 100644
--- a/make/modules/java.base/gensrc/GensrcBuffer.gmk
+++ b/make/modules/java.base/gensrc/GensrcBuffer.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -27,7 +27,7 @@ GENSRC_BUFFER :=
GENSRC_BUFFER_DST := $(SUPPORT_OUTPUTDIR)/gensrc/java.base/java/nio
-GENSRC_BUFFER_SRC := $(TOPDIR)/src/java.base/share/classes/java/nio
+GENSRC_BUFFER_SRC := $(MODULE_SRC)/share/classes/java/nio
###
diff --git a/make/modules/java.base/gensrc/GensrcCharacterData.gmk b/make/modules/java.base/gensrc/GensrcCharacterData.gmk
index eb9380165061cc750cb54d5faaffd3f34cc5052f..115a28309a2374a70e8ed4147dd259f27357e65d 100644
--- a/make/modules/java.base/gensrc/GensrcCharacterData.gmk
+++ b/make/modules/java.base/gensrc/GensrcCharacterData.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -29,8 +29,8 @@
GENSRC_CHARACTERDATA :=
-CHARACTERDATA = $(TOPDIR)/make/data/characterdata
-UNICODEDATA = $(TOPDIR)/make/data/unicodedata
+CHARACTERDATA_TEMPLATES = $(MODULE_SRC)/share/classes/java/lang
+UNICODEDATA = $(MODULE_SRC)/share/data/unicodedata
ifneq ($(DEBUG_LEVEL), release)
ifeq ($(ALLOW_ABSOLUTE_PATHS_IN_OUTPUT), true)
@@ -40,11 +40,11 @@ endif
define SetupCharacterData
$(SUPPORT_OUTPUTDIR)/gensrc/java.base/java/lang/$1.java: \
- $(CHARACTERDATA)/$1.java.template
+ $(CHARACTERDATA_TEMPLATES)/$1.java.template
$$(call LogInfo, Generating $1.java)
$$(call MakeDir, $$(@D))
$(TOOL_GENERATECHARACTER) $2 $(DEBUG_OPTION) \
- -template $(CHARACTERDATA)/$1.java.template \
+ -template $(CHARACTERDATA_TEMPLATES)/$1.java.template \
-spec $(UNICODEDATA)/UnicodeData.txt \
-specialcasing $(UNICODEDATA)/SpecialCasing.txt \
-proplist $(UNICODEDATA)/PropList.txt \
diff --git a/make/modules/java.base/gensrc/GensrcCharsetCoder.gmk b/make/modules/java.base/gensrc/GensrcCharsetCoder.gmk
index 79fa54b19cc0072af4840be05fbaae9b11257e18..2940ba4231931a797d68972fdd92daeda42c9993 100644
--- a/make/modules/java.base/gensrc/GensrcCharsetCoder.gmk
+++ b/make/modules/java.base/gensrc/GensrcCharsetCoder.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -27,7 +27,7 @@ GENSRC_CHARSETCODER :=
GENSRC_CHARSETCODER_DST := $(SUPPORT_OUTPUTDIR)/gensrc/java.base/java/nio/charset
-GENSRC_CHARSETCODER_SRC := $(TOPDIR)/src/java.base/share/classes/java/nio
+GENSRC_CHARSETCODER_SRC := $(MODULE_SRC)/share/classes/java/nio
GENSRC_CHARSETCODER_TEMPLATE := $(GENSRC_CHARSETCODER_SRC)/charset/Charset-X-Coder.java.template
diff --git a/make/modules/java.base/gensrc/GensrcEmojiData.gmk b/make/modules/java.base/gensrc/GensrcEmojiData.gmk
index d92cb9354a3fd3e4f6f0606df9d07728b144f27d..1af03bcafe92523c6fbd9d08953ccd247d1ebead 100644
--- a/make/modules/java.base/gensrc/GensrcEmojiData.gmk
+++ b/make/modules/java.base/gensrc/GensrcEmojiData.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2019, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2019, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -29,8 +29,8 @@
GENSRC_EMOJIDATA := $(SUPPORT_OUTPUTDIR)/gensrc/java.base/java/util/regex/EmojiData.java
-EMOJIDATATEMP = $(TOPDIR)/src/java.base/share/classes/java/util/regex/EmojiData.java.template
-UNICODEDATA = $(TOPDIR)/make/data/unicodedata
+EMOJIDATATEMP = $(MODULE_SRC)/share/classes/java/util/regex/EmojiData.java.template
+UNICODEDATA = $(MODULE_SRC)/share/data/unicodedata
$(GENSRC_EMOJIDATA): $(BUILD_TOOLS_JDK) $(EMOJIDATATEMP) $(UNICODEDATA)/emoji/emoji-data.txt
$(call LogInfo, Generating $@)
diff --git a/make/modules/java.base/gensrc/GensrcExceptions.gmk b/make/modules/java.base/gensrc/GensrcExceptions.gmk
index 37fed896560b7217e2c45e9a4da5598244d3959c..1c4974b4a28c6f01ad8f9d8713895b99f9e43682 100644
--- a/make/modules/java.base/gensrc/GensrcExceptions.gmk
+++ b/make/modules/java.base/gensrc/GensrcExceptions.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -27,7 +27,7 @@ GENSRC_EXCEPTIONS :=
GENSRC_EXCEPTIONS_DST := $(SUPPORT_OUTPUTDIR)/gensrc/java.base/java/nio
-GENSRC_EXCEPTIONS_SRC := $(TOPDIR)/src/java.base/share/classes/java/nio
+GENSRC_EXCEPTIONS_SRC := $(MODULE_SRC)/share/classes/java/nio
GENSRC_EXCEPTIONS_CMD := $(TOPDIR)/make/scripts/genExceptions.sh
GENSRC_EXCEPTIONS_SRC_DIRS := . charset channels
diff --git a/make/modules/java.base/gensrc/GensrcLocaleData.gmk b/make/modules/java.base/gensrc/GensrcLocaleData.gmk
index 1e28d91ab684eb9941da504b0915e997b7f69800..c04bab5317570eb830f44316a61ac6cdd1c58ae7 100644
--- a/make/modules/java.base/gensrc/GensrcLocaleData.gmk
+++ b/make/modules/java.base/gensrc/GensrcLocaleData.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -29,8 +29,8 @@
# First go look for all locale files
LOCALE_FILES := $(call FindFiles, \
- $(TOPDIR)/src/$(MODULE)/share/classes/sun/text/resources \
- $(TOPDIR)/src/$(MODULE)/share/classes/sun/util/resources, \
+ $(MODULE_SRC)/share/classes/sun/text/resources \
+ $(MODULE_SRC)/share/classes/sun/util/resources, \
FormatData_*.java FormatData_*.properties \
CollationData_*.java CollationData_*.properties \
TimeZoneNames_*.java TimeZoneNames_*.properties \
diff --git a/make/modules/java.base/gensrc/GensrcScopedMemoryAccess.gmk b/make/modules/java.base/gensrc/GensrcScopedMemoryAccess.gmk
index b431acc14e1183e577b15200cb393d04d6d9cc13..54fea77571e90209b635811a9348c5732c900d25 100644
--- a/make/modules/java.base/gensrc/GensrcScopedMemoryAccess.gmk
+++ b/make/modules/java.base/gensrc/GensrcScopedMemoryAccess.gmk
@@ -24,7 +24,7 @@
#
SCOPED_MEMORY_ACCESS_GENSRC_DIR := $(SUPPORT_OUTPUTDIR)/gensrc/java.base/jdk/internal/misc
-SCOPED_MEMORY_ACCESS_SRC_DIR := $(TOPDIR)/src/java.base/share/classes/jdk/internal/misc
+SCOPED_MEMORY_ACCESS_SRC_DIR := $(MODULE_SRC)/share/classes/jdk/internal/misc
SCOPED_MEMORY_ACCESS_TEMPLATE := $(SCOPED_MEMORY_ACCESS_SRC_DIR)/X-ScopedMemoryAccess.java.template
SCOPED_MEMORY_ACCESS_BIN_TEMPLATE := $(SCOPED_MEMORY_ACCESS_SRC_DIR)/X-ScopedMemoryAccess-bin.java.template
SCOPED_MEMORY_ACCESS_DEST := $(SCOPED_MEMORY_ACCESS_GENSRC_DIR)/ScopedMemoryAccess.java
@@ -139,7 +139,7 @@ endef
SCOPE_MEMORY_ACCESS_TYPES := Byte Short Char Int Long Float Double
$(foreach t, $(SCOPE_MEMORY_ACCESS_TYPES), \
$(eval $(call GenerateScopedOp,BIN_$t,$t)))
-
+
$(SCOPED_MEMORY_ACCESS_DEST): $(BUILD_TOOLS_JDK) $(SCOPED_MEMORY_ACCESS_TEMPLATE) $(SCOPED_MEMORY_ACCESS_BIN_TEMPLATE)
$(call MakeDir, $(SCOPED_MEMORY_ACCESS_GENSRC_DIR))
$(CAT) $(SCOPED_MEMORY_ACCESS_TEMPLATE) > $(SCOPED_MEMORY_ACCESS_DEST)
@@ -147,5 +147,5 @@ $(SCOPED_MEMORY_ACCESS_DEST): $(BUILD_TOOLS_JDK) $(SCOPED_MEMORY_ACCESS_TEMPLATE
$(TOOL_SPP) -nel -K$(BIN_$t_type) -Dtype=$(BIN_$t_type) -DType=$(BIN_$t_Type) $(BIN_$t_ARGS) \
-i$(SCOPED_MEMORY_ACCESS_BIN_TEMPLATE) -o$(SCOPED_MEMORY_ACCESS_DEST) ;)
$(PRINTF) "}\n" >> $(SCOPED_MEMORY_ACCESS_DEST)
-
+
TARGETS += $(SCOPED_MEMORY_ACCESS_DEST)
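The `foreach`/SPP loop in the hunk above stamps out one ScopedMemoryAccess variant per primitive type from a shared template. The substitution idea can be sketched in Java (the template text and class name below are hypothetical illustrations, not the real SPP tool):

```java
import java.util.List;

public class TemplateStamp {
    // Hypothetical one-line template; the real build feeds
    // X-ScopedMemoryAccess.java.template through the SPP tool with
    // -Dtype/-DType definitions for each primitive type.
    static final String TEMPLATE =
        "public int get$Type$(long offset) { return 0; }";

    public static String expand(String typeName) {
        // Mirrors SPP's -DType=... placeholder substitution.
        return TEMPLATE.replace("$Type$", typeName);
    }

    public static void main(String[] args) {
        for (String t : List.of("Byte", "Short", "Char", "Int",
                                "Long", "Float", "Double")) {
            System.out.println(expand(t));
        }
    }
}
```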
diff --git a/make/modules/java.base/gensrc/GensrcVarHandles.gmk b/make/modules/java.base/gensrc/GensrcVarHandles.gmk
index 579488379c333d853223de5c5a949d30dc55f3b2..e1686834bf5e908dd8d64676c23408ede400b9a1 100644
--- a/make/modules/java.base/gensrc/GensrcVarHandles.gmk
+++ b/make/modules/java.base/gensrc/GensrcVarHandles.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2015, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2015, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -26,7 +26,7 @@
GENSRC_VARHANDLES :=
VARHANDLES_GENSRC_DIR := $(SUPPORT_OUTPUTDIR)/gensrc/java.base/java/lang/invoke
-VARHANDLES_SRC_DIR := $(TOPDIR)/src/java.base/share/classes/java/lang/invoke
+VARHANDLES_SRC_DIR := $(MODULE_SRC)/share/classes/java/lang/invoke
################################################################################
# Setup a rule for generating a VarHandle java class
diff --git a/make/modules/java.desktop/gendata/GendataFontConfig.gmk b/make/modules/java.desktop/gendata/GendataFontConfig.gmk
index 42e3f4b485f14c3c6ca126825fc9748ecf5a32bc..92a64b986e189c4c6a6c98a841607989814a0769 100644
--- a/make/modules/java.desktop/gendata/GendataFontConfig.gmk
+++ b/make/modules/java.desktop/gendata/GendataFontConfig.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2018, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -23,30 +23,35 @@
# questions.
#
-GENDATA_FONT_CONFIG_DST := $(SUPPORT_OUTPUTDIR)/modules_libs/$(MODULE)
+FONTCONFIG_DATA_DIR := $(MODULE_SRC)/$(OPENJDK_TARGET_OS)/data/fontconfig
+FONTCONFIG_SRC_FILE := $(FONTCONFIG_DATA_DIR)/fontconfig.properties
-GENDATA_FONT_CONFIG_DATA_DIR ?= $(TOPDIR)/make/data/fontconfig
+FONTCONFIG_DEST_DIR := $(SUPPORT_OUTPUTDIR)/modules_libs/$(MODULE)
+FONTCONFIG_OUT_FILE := $(FONTCONFIG_DEST_DIR)/fontconfig.properties.src
+FONTCONFIG_OUT_BIN_FILE := $(FONTCONFIG_DEST_DIR)/fontconfig.bfc
-GENDATA_FONT_CONFIG_SRC_FILES := \
- $(wildcard $(GENDATA_FONT_CONFIG_DATA_DIR)/$(OPENJDK_TARGET_OS).*)
+ifneq ($(findstring $(LOG_LEVEL), debug trace), )
+ FONTCONFIG_VERBOSE_FLAG := -verbose
+endif
+# Not all OSes have a fontconfig file
+ifneq ($(wildcard $(FONTCONFIG_SRC_FILE)), )
-$(GENDATA_FONT_CONFIG_DST)/%.src: \
- $(GENDATA_FONT_CONFIG_DATA_DIR)/$(OPENJDK_TARGET_OS).%
+ # Copy properties file as-is
+ $(FONTCONFIG_OUT_FILE): $(FONTCONFIG_SRC_FILE)
+ $(call LogInfo, Copying fontconfig.properties)
$(call install-file)
-$(GENDATA_FONT_CONFIG_DST)/%.bfc: \
- $(GENDATA_FONT_CONFIG_DATA_DIR)/$(OPENJDK_TARGET_OS).%.properties \
- $(BUILD_TOOLS_JDK)
+ TARGETS += $(FONTCONFIG_OUT_FILE)
+
+ # Generate binary representation
+ $(FONTCONFIG_OUT_BIN_FILE): $(FONTCONFIG_SRC_FILE) $(BUILD_TOOLS_JDK)
+ $(call LogInfo, Compiling fontconfig.properties to binary)
$(call MakeTargetDir)
$(RM) $@
- $(TOOL_COMPILEFONTCONFIG) $< $@
+ $(TOOL_COMPILEFONTCONFIG) $(FONTCONFIG_VERBOSE_FLAG) $< $@
$(CHMOD) 444 $@
+ TARGETS += $(FONTCONFIG_OUT_BIN_FILE)
-GENDATA_FONT_CONFIGS := $(patsubst $(GENDATA_FONT_CONFIG_DATA_DIR)/$(OPENJDK_TARGET_OS).%, \
- $(GENDATA_FONT_CONFIG_DST)/%.src, $(GENDATA_FONT_CONFIG_SRC_FILES))
-GENDATA_BFONT_CONFIGS := $(patsubst $(GENDATA_FONT_CONFIG_DATA_DIR)/$(OPENJDK_TARGET_OS).%.properties, \
- $(GENDATA_FONT_CONFIG_DST)/%.bfc, $(GENDATA_FONT_CONFIG_SRC_FILES))
-
-TARGETS := $(GENDATA_FONT_CONFIGS) $(GENDATA_BFONT_CONFIGS)
+endif
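The rewritten rules above copy the per-OS `fontconfig.properties` verbatim and also compile it into the compact binary `.bfc` form. The source file itself is an ordinary Java properties file, so it can be inspected with `java.util.Properties`; the sample keys below are illustrative, not the full format:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class FontConfigPeek {
    // Parse a fontconfig.properties-style fragment. The real files
    // now live under src/java.desktop/<os>/data/fontconfig/.
    public static Properties parse(String text) throws IOException {
        Properties p = new Properties();
        p.load(new StringReader(text));
        return p;
    }

    public static void main(String[] args) throws IOException {
        String sample = "version=1\nsequence.allfonts=default\n";
        System.out.println(parse(sample).getProperty("sequence.allfonts"));
    }
}
```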
diff --git a/make/modules/java.desktop/gensrc/GensrcIcons.gmk b/make/modules/java.desktop/gensrc/GensrcIcons.gmk
index e0a6c107eccfe2dc1b1f7bafe64431e7b117590f..28434d3f4c1be48a8f541e01fabb472c20164a54 100644
--- a/make/modules/java.desktop/gensrc/GensrcIcons.gmk
+++ b/make/modules/java.desktop/gensrc/GensrcIcons.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -29,7 +29,7 @@ GENSRC_AWT_ICONS_TMP := $(SUPPORT_OUTPUTDIR)/gensrc/java.desktop
GENSRC_AWT_ICONS_DST := $(GENSRC_AWT_ICONS_TMP)/sun/awt/
# Allow this to be overridden from a custom makefile
-X11_ICONS_PATH_PREFIX ?= $(TOPDIR)/src/java.desktop/$(OPENJDK_TARGET_OS_TYPE)
+X11_ICONS_PATH_PREFIX ?= $(MODULE_SRC)/$(OPENJDK_TARGET_OS_TYPE)
GENSRC_AWT_ICONS_SRC += \
$(X11_ICONS_PATH_PREFIX)/classes/sun/awt/X11/java-icon16.png \
@@ -38,7 +38,7 @@ GENSRC_AWT_ICONS_SRC += \
$(X11_ICONS_PATH_PREFIX)/classes/sun/awt/X11/java-icon48.png
-AWT_ICONPATH := $(TOPDIR)/src/java.desktop/share/classes/sun/awt/resources
+AWT_ICONPATH := $(MODULE_SRC)/share/classes/sun/awt/resources
GENSRC_AWT_ICONS_SRC += \
$(AWT_ICONPATH)/security-icon-bw16.png \
@@ -111,7 +111,7 @@ ifeq ($(call isTargetOs, macosx), true)
GENSRC_OSX_ICONS_DST := $(SUPPORT_OUTPUTDIR)/headers/java.desktop
GENSRC_OSX_ICONS := $(GENSRC_OSX_ICONS_DST)/AWTIconData.h
- GENSRC_OSX_ICONS_SRC ?= $(TOPDIR)/make/data/macosxicons/JavaApp.icns
+ GENSRC_OSX_ICONS_SRC ?= $(MODULE_SRC)/macosx/data/macosxicons/JavaApp.icns
$(GENSRC_OSX_ICONS): $(GENSRC_OSX_ICONS_SRC) $(BUILD_TOOLS_JDK)
diff --git a/make/modules/java.desktop/gensrc/GensrcSwing.gmk b/make/modules/java.desktop/gensrc/GensrcSwing.gmk
index cfb50831d1bcc7d7464bb3ec2f31a702c85a16db..abd428f3641987eb687ab7d46d87402663c34228 100644
--- a/make/modules/java.desktop/gensrc/GensrcSwing.gmk
+++ b/make/modules/java.desktop/gensrc/GensrcSwing.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -28,7 +28,7 @@
#
NIMBUS_PACKAGE = javax.swing.plaf
NIMBUS_GENSRC_DIR = $(SUPPORT_OUTPUTDIR)/gensrc/java.desktop/javax/swing/plaf/nimbus
-NIMBUS_SKIN_FILE = $(TOPDIR)/src/java.desktop/share/classes/javax/swing/plaf/nimbus/skin.laf
+NIMBUS_SKIN_FILE = $(MODULE_SRC)/share/classes/javax/swing/plaf/nimbus/skin.laf
$(SUPPORT_OUTPUTDIR)/gensrc/java.desktop/_the.generated_nimbus: $(NIMBUS_SKIN_FILE) $(BUILD_TOOLS_JDK)
$(call LogInfo, Generating Nimbus source files)
diff --git a/make/modules/java.desktop/gensrc/GensrcX11Wrappers.gmk b/make/modules/java.desktop/gensrc/GensrcX11Wrappers.gmk
index d46328f8607984f61186a67bce64e0c89ae41ea1..25402ad035a5b6d0f4729bf360c30f62bc04f58a 100644
--- a/make/modules/java.desktop/gensrc/GensrcX11Wrappers.gmk
+++ b/make/modules/java.desktop/gensrc/GensrcX11Wrappers.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2012, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2012, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -24,13 +24,13 @@
#
# Generate java sources using the X11 offsets that are precalculated in files
-# make/data/x11wrappergen/sizes-<bits>.txt.
+# src/java.desktop/unix/data/x11wrappergen/sizes-<bits>.txt.
# Put the generated Java classes used to interface X11 from awt here.
GENSRC_X11WRAPPERS_OUTPUTDIR := $(SUPPORT_OUTPUTDIR)/gensrc/java.desktop/sun/awt/X11
# The pre-calculated offset file are stored here:
-GENSRC_X11WRAPPERS_DATADIR := $(TOPDIR)/make/data/x11wrappergen
+GENSRC_X11WRAPPERS_DATADIR := $(MODULE_SRC)/unix/data/x11wrappergen
GENSRC_X11WRAPPERS_DATA := $(GENSRC_X11WRAPPERS_DATADIR)/sizes-$(OPENJDK_TARGET_CPU_BITS).txt
# Run the tool on the offset files to generate several Java classes used in awt.
diff --git a/make/modules/java.desktop/lib/Awt2dLibraries.gmk b/make/modules/java.desktop/lib/Awt2dLibraries.gmk
index a0c4082554626e942d200e73373a734eb99f8e3e..3cf8ca8a820e8439ab1582b3d2d01a1ea69442b9 100644
--- a/make/modules/java.desktop/lib/Awt2dLibraries.gmk
+++ b/make/modules/java.desktop/lib/Awt2dLibraries.gmk
@@ -742,7 +742,7 @@ ifeq ($(ENABLE_HEADLESS_ONLY), false)
maybe-uninitialized shift-negative-value implicit-fallthrough \
unused-function, \
DISABLED_WARNINGS_clang := incompatible-pointer-types sign-compare \
- deprecated-declarations, \
+ deprecated-declarations null-pointer-subtraction, \
DISABLED_WARNINGS_microsoft := 4018 4244 4267, \
LDFLAGS := $(LDFLAGS_JDKLIB) \
$(call SET_SHARED_LIBRARY_ORIGIN), \
diff --git a/make/modules/jdk.charsets/Gensrc.gmk b/make/modules/jdk.charsets/Gensrc.gmk
index ca9c19409411ee21d325c89ac588567ea2b2a690..1fac37b2c4b99fa1268cc96241b4f7dc6fb5ee2c 100644
--- a/make/modules/jdk.charsets/Gensrc.gmk
+++ b/make/modules/jdk.charsets/Gensrc.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -38,8 +38,8 @@ CHARSET_TEMPLATES := \
$(CHARSET_DATA_DIR)/SingleByte-X.java.template \
$(CHARSET_DATA_DIR)/DoubleByte-X.java.template
CHARSET_EXTENDED_JAVA_TEMPLATES := \
- $(TOPDIR)/src/jdk.charsets/share/classes/sun/nio/cs/ext/ExtendedCharsets.java.template
-CHARSET_EXTENDED_JAVA_DIR := $(TOPDIR)/src/jdk.charsets/share/classes/sun/nio/cs/ext
+ $(MODULE_SRC)/share/classes/sun/nio/cs/ext/ExtendedCharsets.java.template
+CHARSET_EXTENDED_JAVA_DIR := $(MODULE_SRC)/share/classes/sun/nio/cs/ext
CHARSET_STANDARD_OS := stdcs-$(OPENJDK_TARGET_OS)
$(CHARSET_DONE_CS)-extcs: $(CHARSET_DATA_DIR)/charsets \
diff --git a/make/modules/jdk.compiler/Gendata.gmk b/make/modules/jdk.compiler/Gendata.gmk
index 85815e5524b1edeb7423a5199a328fa053e70c49..5471fa1127c1c664aeac5613fc4f3a65a2c83e80 100644
--- a/make/modules/jdk.compiler/Gendata.gmk
+++ b/make/modules/jdk.compiler/Gendata.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2015, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2015, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -41,7 +41,7 @@ CT_MODULES := $(DOCS_MODULES)
# Get the complete module source path:
CT_MODULESOURCEPATH := $(call GetModuleSrcPath)
-CT_DATA_DESCRIPTION += $(TOPDIR)/make/data/symbols/symbols
+CT_DATA_DESCRIPTION += $(MODULE_SRC)/share/data/symbols/symbols
COMPILECREATESYMBOLS_ADD_EXPORTS := \
--add-exports java.base/jdk.internal.javac=java.compiler.interim,jdk.compiler.interim \
@@ -65,7 +65,7 @@ $(eval $(call SetupJavaCompilation, COMPILE_CREATE_SYMBOLS, \
$(SUPPORT_OUTPUTDIR)/symbols/ct.sym: \
$(COMPILE_CREATE_SYMBOLS) \
- $(wildcard $(TOPDIR)/make/data/symbols/*) \
+ $(wildcard $(MODULE_SRC)/share/data/symbols/*) \
$(MODULE_INFOS)
$(RM) -r $(@D)
$(MKDIR) -p $(@D)
diff --git a/make/modules/jdk.javadoc/Gendata.gmk b/make/modules/jdk.javadoc/Gendata.gmk
index 50ef87545a4cdf360d52e3de65973777dac21bab..69c93c29468b890b3888f25c65fd75e3a6277fa7 100644
--- a/make/modules/jdk.javadoc/Gendata.gmk
+++ b/make/modules/jdk.javadoc/Gendata.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2015, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2015, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -39,7 +39,7 @@ JAVADOC_MODULES := $(DOCS_MODULES)
# Get the complete module source path:
JAVADOC_MODULESOURCEPATH := $(call GetModuleSrcPath)
-CT_DATA_DESCRIPTION += $(TOPDIR)/make/data/symbols/symbols
+CT_DATA_DESCRIPTION += $(TOPDIR)/src/jdk.compiler/share/data/symbols/symbols
COMPILECREATESYMBOLS_ADD_EXPORTS := \
--add-exports java.base/jdk.internal=java.compiler.interim,jdk.compiler.interim \
@@ -68,7 +68,7 @@ ELEMENT_LISTS_DIR := $(JDK_JAVADOC_DIR)/$(ELEMENT_LISTS_PKG)
$(JDK_JAVADOC_DIR)/_element_lists.marker: \
$(COMPILE_CREATE_SYMBOLS) \
- $(wildcard $(TOPDIR)/make/data/symbols/*) \
+ $(wildcard $(TOPDIR)/src/jdk.compiler/share/data/symbols/*) \
$(MODULE_INFOS)
$(call MakeTargetDir)
$(call LogInfo, Creating javadoc element lists)
diff --git a/make/modules/jdk.jdi/Gensrc.gmk b/make/modules/jdk.jdi/Gensrc.gmk
index 5487e950921ea59c36051566daa702791c9174c7..7db06b5c95873a3e0b5258124b82dcc095286bff 100644
--- a/make/modules/jdk.jdi/Gensrc.gmk
+++ b/make/modules/jdk.jdi/Gensrc.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -26,10 +26,11 @@
include GensrcCommonJdk.gmk
################################################################################
-# Translate the Java debugger wire protocol (jdwp.spec) file into a JDWP.java file
-# and a JDWPCommands.h C-header file.
+# Translate the Java debugger wire protocol (jdwp.spec) file into a front-end
+# Java implementation (JDWP.java), a back-end C header file (JDWPCommands.h) and
+# an HTML documentation file (jdwp-protocol.html).
-JDWP_SPEC_FILE := $(TOPDIR)/make/data/jdwp/jdwp.spec
+JDWP_SPEC_FILE := $(TOPDIR)/src/java.se/share/data/jdwp/jdwp.spec
HEADER_FILE := $(SUPPORT_OUTPUTDIR)/headers/jdk.jdwp.agent/JDWPCommands.h
JAVA_FILE := $(SUPPORT_OUTPUTDIR)/gensrc/jdk.jdi/com/sun/tools/jdi/JDWP.java
HTML_FILE := $(SUPPORT_OUTPUTDIR)/gensrc/jdk.jdi/jdwp-protocol.html
diff --git a/make/modules/jdk.localedata/Gensrc.gmk b/make/modules/jdk.localedata/Gensrc.gmk
index 09f014e8607c6c6244cae52a91f16f2c9fa37fcf..233572c8a544bde9338cd01146636658dd0b6bb6 100644
--- a/make/modules/jdk.localedata/Gensrc.gmk
+++ b/make/modules/jdk.localedata/Gensrc.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2014, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2014, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -57,7 +57,7 @@ TARGETS += $(CLDR_GEN_DONE)
include GensrcProperties.gmk
$(eval $(call SetupCompileProperties, COMPILE_PROPERTIES, \
- SRC_DIRS := $(TOPDIR)/src/jdk.localedata/share/classes/sun/util/resources, \
+ SRC_DIRS := $(MODULE_SRC)/share/classes/sun/util/resources, \
CLASS := sun.util.resources.LocaleNamesBundle, \
KEEP_ALL_TRANSLATIONS := true, \
))
diff --git a/make/scripts/compare.sh b/make/scripts/compare.sh
index cc05476c997e5090c860882fe72e241b1bf89531..a0006fa4ceee104ccc9948cf7bd3a27a45993615 100644
--- a/make/scripts/compare.sh
+++ b/make/scripts/compare.sh
@@ -1,6 +1,6 @@
#!/bin/bash
#
-# Copyright (c) 2012, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2012, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -324,7 +324,7 @@ compare_general_files() {
! -name "*.cpl" ! -name "*.pdb" ! -name "*.exp" ! -name "*.ilk" \
! -name "*.lib" ! -name "*.jmod" ! -name "*.exe" \
! -name "*.obj" ! -name "*.o" ! -name "jspawnhelper" ! -name "*.a" \
- ! -name "*.tar.gz" ! -name "*.jsa" ! -name "gtestLauncher" \
+ ! -name "*.tar.gz" ! -name "classes_nocoops.jsa" ! -name "gtestLauncher" \
! -name "*.map" \
| $GREP -v "./bin/" | $SORT | $FILTER)
diff --git a/make/scripts/generate-symbol-data.sh b/make/scripts/generate-symbol-data.sh
index 56aa8016dd6a8a55730682e0484eb034a1cb51fa..ee1d540715fd3905021269dcc6c4ae48cc57e1d8 100644
--- a/make/scripts/generate-symbol-data.sh
+++ b/make/scripts/generate-symbol-data.sh
@@ -34,19 +34,19 @@
# - have a checkout the JDK to which the data should be added (or in which the data should be updated).
# The checkout directory will be denoted as "${JDK_CHECKOUT}" in the further text.
# The checkout must not have any local changes that could interfere with the new data. In particular,
-# there must be absolutely no changed, new or removed files under the ${JDK_CHECKOUT}/make/data/symbols
+# there must be absolutely no changed, new or removed files under the ${JDK_CHECKOUT}/src/jdk.compiler/share/data/symbols
# directory.
# - open a terminal program and run these commands:
-# cd "${JDK_CHECKOUT}"/make/data/symbols
+# cd "${JDK_CHECKOUT}"/src/jdk.compiler/share/data/symbols
# bash ../../scripts/generate-symbol-data.sh "${JDK_N_INSTALL}"
-# - this command will generate or update data for "--release N" into the ${JDK_CHECKOUT}/make/data/symbols
+# - this command will generate or update data for "--release N" into the ${JDK_CHECKOUT}/src/jdk.compiler/share/data/symbols
# directory, updating all registration necessary. If the goal was to update the data, and there are no
-# new or changed files in the ${JDK_CHECKOUT}/make/data/symbols directory after running this script,
+# new or changed files in the ${JDK_CHECKOUT}/src/jdk.compiler/share/data/symbols directory after running this script,
# there were no relevant changes and no further action is necessary. Note that version for N > 9 are encoded
# using capital letters, i.e. A represents version 10, B represents 11, and so on. The version numbers are in
-# the names of the files in the ${JDK_CHECKOUT}/make/data/symbols directory, as well as in
-# the ${JDK_CHECKOUT}/make/data/symbols/symbols file.
-# - if there are any changed/new files in the ${JDK_CHECKOUT}/make/data/symbols directory after running this script,
+# the names of the files in the ${JDK_CHECKOUT}/src/jdk.compiler/share/data/symbols directory, as well as in
+# the ${JDK_CHECKOUT}/src/jdk.compiler/share/data/symbols/symbols file.
+# - if there are any changed/new files in the ${JDK_CHECKOUT}/src/jdk.compiler/share/data/symbols directory after running this script,
# then all the changes in this directory, including any new files, need to be sent for review and eventually pushed.
# The commit message should specify which binary build was installed in the ${JDK_N_INSTALL} directory and also
# include the SCM state that was used to build it, which can be found in ${JDK_N_INSTALL}/release,
@@ -59,12 +59,12 @@ if [ "$1x" = "x" ] ; then
fi;
if [ ! -f symbols ] ; then
- echo "Must run inside the make/data/symbols directory" >&2
+ echo "Must run inside the src/jdk.compiler/share/data/symbols directory" >&2
exit 1
fi;
if [ "`git status --porcelain=v1 .`x" != "x" ] ; then
- echo "The make/data/symbols directory contains local changes!" >&2
+ echo "The src/jdk.compiler/share/data/symbols directory contains local changes!" >&2
exit 1
fi;
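The script comments above note that releases after 9 are encoded as capital letters in the symbol-file names (A is 10, B is 11, and so on). That encoding can be sketched as follows (the helper name is hypothetical, not part of the script):

```java
public class ReleaseLetter {
    // Encode a JDK release number the way the ct.sym symbol files do:
    // single digits stay as-is, 10 becomes 'A', 11 becomes 'B', ...
    public static char encode(int release) {
        if (release <= 9) {
            return (char) ('0' + release);
        }
        return (char) ('A' + (release - 10));
    }
}
```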
diff --git a/make/test/JtregNativeJdk.gmk b/make/test/JtregNativeJdk.gmk
index 6797952bc05d3e8facec0eaeb98f344527d984c0..89cc94f90bf02e9b4eb8bd8fc1700253fd2115fb 100644
--- a/make/test/JtregNativeJdk.gmk
+++ b/make/test/JtregNativeJdk.gmk
@@ -64,6 +64,7 @@ ifeq ($(call isTargetOs, windows), true)
BUILD_JDK_JTREG_EXECUTABLES_LIBS_exeCallerAccessTest := jvm.lib
BUILD_JDK_JTREG_EXECUTABLES_LIBS_exeNullCallerClassLoaderTest := jvm.lib
BUILD_JDK_JTREG_EXECUTABLES_LIBS_exeNullCallerLookupTest := jvm.lib
+ BUILD_JDK_JTREG_EXECUTABLES_LIBS_exeNullCallerResourceBundle := jvm.lib
BUILD_JDK_JTREG_EXECUTABLES_LIBS_exerevokeall := advapi32.lib
BUILD_JDK_JTREG_LIBRARIES_CFLAGS_libAsyncStackWalk := /EHsc
BUILD_JDK_JTREG_LIBRARIES_CFLAGS_libAsyncInvokers := /EHsc
@@ -84,6 +85,7 @@ else
BUILD_JDK_JTREG_EXECUTABLES_LIBS_exeCallerAccessTest := -ljvm
BUILD_JDK_JTREG_EXECUTABLES_LIBS_exeNullCallerClassLoaderTest := -ljvm
BUILD_JDK_JTREG_EXECUTABLES_LIBS_exeNullCallerLookupTest := -ljvm
+ BUILD_JDK_JTREG_EXECUTABLES_LIBS_exeNullCallerResourceBundle := -ljvm
endif
ifeq ($(call isTargetOs, macosx), true)
diff --git a/src/hotspot/cpu/aarch64/aarch64.ad b/src/hotspot/cpu/aarch64/aarch64.ad
index 5b4ad50b2a5d22ddb9401050a8236a76141c1794..68fd336aa33b6f6ce3c685279ae5cd8e110bc777 100644
--- a/src/hotspot/cpu/aarch64/aarch64.ad
+++ b/src/hotspot/cpu/aarch64/aarch64.ad
@@ -1311,6 +1311,9 @@ public:
// predicate controlling translation of CompareAndSwapX
bool needs_acquiring_load_exclusive(const Node *load);
+ // Assert that the given node is not a variable shift.
+ bool assert_not_var_shift(const Node* n);
+
// predicate controlling addressing modes
bool size_fits_all_mem_uses(AddPNode* addp, int shift);
%}
@@ -1725,6 +1728,12 @@ bool needs_acquiring_load_exclusive(const Node *n)
return true;
}
+// Assert that the given node is not a variable shift.
+bool assert_not_var_shift(const Node* n) {
+ assert(!n->as_ShiftV()->is_var_shift(), "illegal variable shift");
+ return true;
+}
+
#define __ _masm.
// advance declarations for helper functions to convert register
@@ -1853,6 +1862,10 @@ void MachPrologNode::format(PhaseRegAlloc *ra_, outputStream *st) const {
if (C->output()->need_stack_bang(framesize))
st->print("# stack bang size=%d\n\t", framesize);
+ if (VM_Version::use_rop_protection()) {
+ st->print("ldr zr, [lr]\n\t");
+ st->print("pacia lr, rfp\n\t");
+ }
if (framesize < ((1 << 9) + 2 * wordSize)) {
st->print("sub sp, sp, #%d\n\t", framesize);
st->print("stp rfp, lr, [sp, #%d]", framesize - 2 * wordSize);
@@ -1961,6 +1974,10 @@ void MachEpilogNode::format(PhaseRegAlloc *ra_, outputStream *st) const {
st->print("add sp, sp, rscratch1\n\t");
st->print("ldp lr, rfp, [sp],#%d\n\t", (2 * wordSize));
}
+ if (VM_Version::use_rop_protection()) {
+ st->print("autia lr, rfp\n\t");
+ st->print("ldr zr, [lr]\n\t");
+ }
if (do_polling() && C->is_method_compilation()) {
st->print("# test polling word\n\t");
@@ -3201,16 +3218,30 @@ encode %{
rscratch1, stlrb);
%}
+ enc_class aarch64_enc_stlrb0(memory mem) %{
+ MOV_VOLATILE(zr, $mem$$base, $mem$$index, $mem$$scale, $mem$$disp,
+ rscratch1, stlrb);
+ %}
+
enc_class aarch64_enc_stlrh(iRegI src, memory mem) %{
MOV_VOLATILE(as_Register($src$$reg), $mem$$base, $mem$$index, $mem$$scale, $mem$$disp,
rscratch1, stlrh);
%}
+ enc_class aarch64_enc_stlrh0(memory mem) %{
+ MOV_VOLATILE(zr, $mem$$base, $mem$$index, $mem$$scale, $mem$$disp,
+ rscratch1, stlrh);
+ %}
+
enc_class aarch64_enc_stlrw(iRegI src, memory mem) %{
MOV_VOLATILE(as_Register($src$$reg), $mem$$base, $mem$$index, $mem$$scale, $mem$$disp,
rscratch1, stlrw);
%}
+ enc_class aarch64_enc_stlrw0(memory mem) %{
+ MOV_VOLATILE(zr, $mem$$base, $mem$$index, $mem$$scale, $mem$$disp,
+ rscratch1, stlrw);
+ %}
enc_class aarch64_enc_ldarsbw(iRegI dst, memory mem) %{
Register dst_reg = as_Register($dst$$reg);
@@ -3301,6 +3332,11 @@ encode %{
rscratch1, stlr);
%}
+ enc_class aarch64_enc_stlr0(memory mem) %{
+ MOV_VOLATILE(zr, $mem$$base, $mem$$index, $mem$$scale, $mem$$disp,
+ rscratch1, stlr);
+ %}
+
enc_class aarch64_enc_fstlrs(vRegF src, memory mem) %{
{
C2_MacroAssembler _masm(&cbuf);
@@ -8275,6 +8311,18 @@ instruct storeB_volatile(iRegIorL2I src, /* sync_memory*/indirect mem)
ins_pipe(pipe_class_memory);
%}
+instruct storeimmB0_volatile(immI0 zero, /* sync_memory*/indirect mem)
+%{
+ match(Set mem (StoreB mem zero));
+
+ ins_cost(VOLATILE_REF_COST);
+ format %{ "stlrb zr, $mem\t# byte" %}
+
+ ins_encode(aarch64_enc_stlrb0(mem));
+
+ ins_pipe(pipe_class_memory);
+%}
+
// Store Char/Short
instruct storeC_volatile(iRegIorL2I src, /* sync_memory*/indirect mem)
%{
@@ -8288,6 +8336,18 @@ instruct storeC_volatile(iRegIorL2I src, /* sync_memory*/indirect mem)
ins_pipe(pipe_class_memory);
%}
+instruct storeimmC0_volatile(immI0 zero, /* sync_memory*/indirect mem)
+%{
+ match(Set mem (StoreC mem zero));
+
+ ins_cost(VOLATILE_REF_COST);
+ format %{ "stlrh zr, $mem\t# short" %}
+
+ ins_encode(aarch64_enc_stlrh0(mem));
+
+ ins_pipe(pipe_class_memory);
+%}
+
// Store Integer
instruct storeI_volatile(iRegIorL2I src, /* sync_memory*/indirect mem)
@@ -8302,6 +8362,18 @@ instruct storeI_volatile(iRegIorL2I src, /* sync_memory*/indirect mem)
ins_pipe(pipe_class_memory);
%}
+instruct storeimmI0_volatile(immI0 zero, /* sync_memory*/indirect mem)
+%{
+ match(Set mem (StoreI mem zero));
+
+ ins_cost(VOLATILE_REF_COST);
+ format %{ "stlrw zr, $mem\t# int" %}
+
+ ins_encode(aarch64_enc_stlrw0(mem));
+
+ ins_pipe(pipe_class_memory);
+%}
+
// Store Long (64 bit signed)
instruct storeL_volatile(iRegL src, /* sync_memory*/indirect mem)
%{
@@ -8315,6 +8387,18 @@ instruct storeL_volatile(iRegL src, /* sync_memory*/indirect mem)
ins_pipe(pipe_class_memory);
%}
+instruct storeimmL0_volatile(immL0 zero, /* sync_memory*/indirect mem)
+%{
+ match(Set mem (StoreL mem zero));
+
+ ins_cost(VOLATILE_REF_COST);
+ format %{ "stlr zr, $mem\t# long" %}
+
+ ins_encode(aarch64_enc_stlr0(mem));
+
+ ins_pipe(pipe_class_memory);
+%}
+
// Store Pointer
instruct storeP_volatile(iRegP src, /* sync_memory*/indirect mem)
%{
@@ -8328,6 +8412,18 @@ instruct storeP_volatile(iRegP src, /* sync_memory*/indirect mem)
ins_pipe(pipe_class_memory);
%}
+instruct storeimmP0_volatile(immP0 zero, /* sync_memory*/indirect mem)
+%{
+ match(Set mem (StoreP mem zero));
+
+ ins_cost(VOLATILE_REF_COST);
+ format %{ "stlr zr, $mem\t# ptr" %}
+
+ ins_encode(aarch64_enc_stlr0(mem));
+
+ ins_pipe(pipe_class_memory);
+%}
+
// Store Compressed Pointer
instruct storeN_volatile(iRegN src, /* sync_memory*/indirect mem)
%{
@@ -8341,6 +8437,18 @@ instruct storeN_volatile(iRegN src, /* sync_memory*/indirect mem)
ins_pipe(pipe_class_memory);
%}
+instruct storeimmN0_volatile(immN0 zero, /* sync_memory*/indirect mem)
+%{
+ match(Set mem (StoreN mem zero));
+
+ ins_cost(VOLATILE_REF_COST);
+ format %{ "stlrw zr, $mem\t# compressed ptr" %}
+
+ ins_encode(aarch64_enc_stlrw0(mem));
+
+ ins_pipe(pipe_class_memory);
+%}
+
// Store Float
instruct storeF_volatile(vRegF src, /* sync_memory*/indirect mem)
%{
@@ -16972,13 +17080,13 @@ instruct array_equalsC(iRegP_R1 ary1, iRegP_R2 ary2, iRegI_R0 result,
ins_pipe(pipe_class_memory);
%}
-instruct has_negatives(iRegP_R1 ary1, iRegI_R2 len, iRegI_R0 result, rFlagsReg cr)
+instruct count_positives(iRegP_R1 ary1, iRegI_R2 len, iRegI_R0 result, rFlagsReg cr)
%{
- match(Set result (HasNegatives ary1 len));
+ match(Set result (CountPositives ary1 len));
effect(USE_KILL ary1, USE_KILL len, KILL cr);
- format %{ "has negatives byte[] $ary1,$len -> $result" %}
+ format %{ "count positives byte[] $ary1,$len -> $result" %}
ins_encode %{
- address tpc = __ has_negatives($ary1$$Register, $len$$Register, $result$$Register);
+ address tpc = __ count_positives($ary1$$Register, $len$$Register, $result$$Register);
if (tpc == NULL) {
ciEnv::current()->record_failure("CodeCache is full");
return;
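The new `storeimm*0_volatile` rules in the hunk above let C2 match a volatile store of a zero constant directly against the AArch64 zero register, emitting `stlrb/stlrh/stlrw/stlr zr` instead of first materializing zero in a general-purpose register. The Java-level pattern they target looks like this (field names are hypothetical; the instruction selection itself happens inside C2, not in source code):

```java
public class VolatileZero {
    static volatile int counter = 42;
    static volatile Object ref = new Object();

    public static void reset() {
        // On AArch64, these zero/null volatile stores can now compile
        // to "stlrw zr, [addr]" and "stlr zr, [addr]" directly.
        counter = 0;
        ref = null;
    }
}
```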
diff --git a/src/hotspot/cpu/aarch64/aarch64_neon.ad b/src/hotspot/cpu/aarch64/aarch64_neon.ad
index 7c84a93583b10a8e9ff5c4526c84970f3e02419f..feecd8ab90add28181a2c7b2bd615432703389b9 100644
--- a/src/hotspot/cpu/aarch64/aarch64_neon.ad
+++ b/src/hotspot/cpu/aarch64/aarch64_neon.ad
@@ -1,5 +1,5 @@
-// Copyright (c) 2020, 2021, Oracle and/or its affiliates. All rights reserved.
-// Copyright (c) 2020, 2021, Arm Limited. All rights reserved.
+// Copyright (c) 2020, 2022, Oracle and/or its affiliates. All rights reserved.
+// Copyright (c) 2020, 2022, Arm Limited. All rights reserved.
// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
//
// This code is free software; you can redistribute it and/or modify it
@@ -4400,11 +4400,17 @@ instruct vxor16B(vecX dst, vecX src1, vecX src2)
// ------------------------------ Shift ---------------------------------------
-instruct vshiftcnt8B(vecD dst, iRegIorL2I cnt) %{
+// Vector shift count
+// Note-1: Low 8 bits of each element are used, so it doesn't matter if we
+// treat it as ints or bytes here.
+// Note-2: Shift value is negated for RShiftCntV additionally. See the comments
+// on vsra8B rule for more details.
+
+instruct vslcnt8B(vecD dst, iRegIorL2I cnt) %{
predicate(UseSVE == 0 && (n->as_Vector()->length_in_bytes() == 4 ||
- n->as_Vector()->length_in_bytes() == 8));
+ n->as_Vector()->length_in_bytes() == 8));
match(Set dst (LShiftCntV cnt));
- match(Set dst (RShiftCntV cnt));
+ ins_cost(INSN_COST);
format %{ "dup $dst, $cnt\t# shift count vector (8B)" %}
ins_encode %{
__ dup(as_FloatRegister($dst$$reg), __ T8B, as_Register($cnt$$reg));
@@ -4412,10 +4418,10 @@ instruct vshiftcnt8B(vecD dst, iRegIorL2I cnt) %{
ins_pipe(vdup_reg_reg64);
%}
-instruct vshiftcnt16B(vecX dst, iRegIorL2I cnt) %{
- predicate(UseSVE == 0 && (n->as_Vector()->length_in_bytes() == 16));
+instruct vslcnt16B(vecX dst, iRegIorL2I cnt) %{
+ predicate(UseSVE == 0 && n->as_Vector()->length_in_bytes() == 16);
match(Set dst (LShiftCntV cnt));
- match(Set dst (RShiftCntV cnt));
+ ins_cost(INSN_COST);
format %{ "dup $dst, $cnt\t# shift count vector (16B)" %}
ins_encode %{
__ dup(as_FloatRegister($dst$$reg), __ T16B, as_Register($cnt$$reg));
@@ -4423,9 +4429,35 @@ instruct vshiftcnt16B(vecX dst, iRegIorL2I cnt) %{
ins_pipe(vdup_reg_reg128);
%}
+instruct vsrcnt8B(vecD dst, iRegIorL2I cnt) %{
+ predicate(UseSVE == 0 && (n->as_Vector()->length_in_bytes() == 4 ||
+ n->as_Vector()->length_in_bytes() == 8));
+ match(Set dst (RShiftCntV cnt));
+ ins_cost(INSN_COST * 2);
+ format %{ "negw rscratch1, $cnt\t"
+ "dup $dst, rscratch1\t# shift count vector (8B)" %}
+ ins_encode %{
+ __ negw(rscratch1, as_Register($cnt$$reg));
+ __ dup(as_FloatRegister($dst$$reg), __ T8B, rscratch1);
+ %}
+ ins_pipe(vdup_reg_reg64);
+%}
+
+instruct vsrcnt16B(vecX dst, iRegIorL2I cnt) %{
+ predicate(UseSVE == 0 && n->as_Vector()->length_in_bytes() == 16);
+ match(Set dst (RShiftCntV cnt));
+ ins_cost(INSN_COST * 2);
+ format %{ "negw rscratch1, $cnt\t"
+ "dup $dst, rscratch1\t# shift count vector (16B)" %}
+ ins_encode %{
+ __ negw(rscratch1, as_Register($cnt$$reg));
+ __ dup(as_FloatRegister($dst$$reg), __ T16B, rscratch1);
+ %}
+ ins_pipe(vdup_reg_reg128);
+%}
+
instruct vsll8B(vecD dst, vecD src, vecD shift) %{
- predicate(n->as_Vector()->length() == 4 ||
- n->as_Vector()->length() == 8);
+ predicate(n->as_Vector()->length() == 4 || n->as_Vector()->length() == 8);
match(Set dst (LShiftVB src shift));
ins_cost(INSN_COST);
format %{ "sshl $dst,$src,$shift\t# vector (8B)" %}
@@ -4459,8 +4491,6 @@ instruct vsll16B(vecX dst, vecX src, vecX shift) %{
// LoadVector RShiftCntV
// | /
// RShiftVI
-// Note: In inner loop, multiple neg instructions are used, which can be
-// moved to outer loop and merge into one neg instruction.
//
// Case 2: The vector shift count is from loading.
// This case isn't supported by middle-end now. But it's supported by
@@ -4470,83 +4500,145 @@ instruct vsll16B(vecX dst, vecX src, vecX shift) %{
// | /
// RShiftVI
//
+// The negate is conducted in RShiftCntV rule for case 1, whereas it's done in
+// RShiftV* rules for case 2. Because there exists an optimization opportunity
+// for case 1, that is, multiple neg instructions in inner loop can be hoisted
+// to outer loop and merged into one neg instruction.
+//
+// Note that ShiftVNode::is_var_shift() indicates whether the vector shift
+// count is a variable vector(case 2) or not(a vector generated by RShiftCntV,
+// i.e. case 1).
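As a hedged illustration of the two cases the comment describes (this Java is not part of the patch): in case 1 the shift amount is a loop-invariant scalar, so C2 materializes it once through `RShiftCntV`, and the negate done in that rule is hoisted out of the loop. In case 2 each lane would carry its own count (e.g. loaded via the Vector API), which is the `is_var_shift()` shape the new `*_var` rules handle, keeping the negate with each shift.

```java
public class VarShiftSketch {
    // Case 1: loop-invariant scalar shift count. C2 broadcasts `s` via a
    // single RShiftCntV node; with this patch the negate happens there,
    // once, outside the loop body.
    static void shiftInvariant(int[] a, int[] r, int s) {
        for (int i = 0; i < a.length; i++) {
            r[i] = a[i] >> s;
        }
    }
    // Case 2 (is_var_shift() == true) would instead load a vector of
    // per-lane shift counts from memory, so no RShiftCntV exists to hoist
    // the negate into; the *_var rules pay for it at each shift.

    public static void main(String[] args) {
        int[] r = new int[3];
        shiftInvariant(new int[]{8, -8, 1024}, r, 3);
        System.out.println(java.util.Arrays.toString(r)); // [1, -1, 128]
    }
}
```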
-instruct vsra8B(vecD dst, vecD src, vecD shift, vecD tmp) %{
- predicate(n->as_Vector()->length() == 4 ||
- n->as_Vector()->length() == 8);
+instruct vsra8B(vecD dst, vecD src, vecD shift) %{
+ predicate((n->as_Vector()->length() == 4 || n->as_Vector()->length() == 8) &&
+ !n->as_ShiftV()->is_var_shift());
match(Set dst (RShiftVB src shift));
ins_cost(INSN_COST);
- effect(TEMP tmp);
- format %{ "negr $tmp,$shift\t"
- "sshl $dst,$src,$tmp\t# vector (8B)" %}
+ format %{ "sshl $dst,$src,$shift\t# vector (8B)" %}
+ ins_encode %{
+ __ sshl(as_FloatRegister($dst$$reg), __ T8B,
+ as_FloatRegister($src$$reg),
+ as_FloatRegister($shift$$reg));
+ %}
+ ins_pipe(vshift64);
+%}
+
+instruct vsra8B_var(vecD dst, vecD src, vecD shift) %{
+ predicate((n->as_Vector()->length() == 4 || n->as_Vector()->length() == 8) &&
+ n->as_ShiftV()->is_var_shift());
+ match(Set dst (RShiftVB src shift));
+ ins_cost(INSN_COST * 2);
+ effect(TEMP_DEF dst);
+ format %{ "negr $dst,$shift\t"
+ "sshl $dst,$src,$dst\t# vector (8B)" %}
ins_encode %{
- __ negr(as_FloatRegister($tmp$$reg), __ T8B,
+ __ negr(as_FloatRegister($dst$$reg), __ T8B,
as_FloatRegister($shift$$reg));
__ sshl(as_FloatRegister($dst$$reg), __ T8B,
as_FloatRegister($src$$reg),
- as_FloatRegister($tmp$$reg));
+ as_FloatRegister($dst$$reg));
%}
ins_pipe(vshift64);
%}
-instruct vsra16B(vecX dst, vecX src, vecX shift, vecX tmp) %{
- predicate(n->as_Vector()->length() == 16);
+instruct vsra16B(vecX dst, vecX src, vecX shift) %{
+ predicate(n->as_Vector()->length() == 16 && !n->as_ShiftV()->is_var_shift());
match(Set dst (RShiftVB src shift));
ins_cost(INSN_COST);
- effect(TEMP tmp);
- format %{ "negr $tmp,$shift\t"
- "sshl $dst,$src,$tmp\t# vector (16B)" %}
+ format %{ "sshl $dst,$src,$shift\t# vector (16B)" %}
ins_encode %{
- __ negr(as_FloatRegister($tmp$$reg), __ T16B,
+ __ sshl(as_FloatRegister($dst$$reg), __ T16B,
+ as_FloatRegister($src$$reg),
+ as_FloatRegister($shift$$reg));
+ %}
+ ins_pipe(vshift128);
+%}
+
+instruct vsra16B_var(vecX dst, vecX src, vecX shift) %{
+ predicate(n->as_Vector()->length() == 16 && n->as_ShiftV()->is_var_shift());
+ match(Set dst (RShiftVB src shift));
+ ins_cost(INSN_COST * 2);
+ effect(TEMP_DEF dst);
+ format %{ "negr $dst,$shift\t"
+ "sshl $dst,$src,$dst\t# vector (16B)" %}
+ ins_encode %{
+ __ negr(as_FloatRegister($dst$$reg), __ T16B,
as_FloatRegister($shift$$reg));
__ sshl(as_FloatRegister($dst$$reg), __ T16B,
as_FloatRegister($src$$reg),
- as_FloatRegister($tmp$$reg));
+ as_FloatRegister($dst$$reg));
%}
ins_pipe(vshift128);
%}
-instruct vsrl8B(vecD dst, vecD src, vecD shift, vecD tmp) %{
- predicate(n->as_Vector()->length() == 4 ||
- n->as_Vector()->length() == 8);
+instruct vsrl8B(vecD dst, vecD src, vecD shift) %{
+ predicate((n->as_Vector()->length() == 4 || n->as_Vector()->length() == 8) &&
+ !n->as_ShiftV()->is_var_shift());
match(Set dst (URShiftVB src shift));
ins_cost(INSN_COST);
- effect(TEMP tmp);
- format %{ "negr $tmp,$shift\t"
- "ushl $dst,$src,$tmp\t# vector (8B)" %}
+ format %{ "ushl $dst,$src,$shift\t# vector (8B)" %}
ins_encode %{
- __ negr(as_FloatRegister($tmp$$reg), __ T8B,
+ __ ushl(as_FloatRegister($dst$$reg), __ T8B,
+ as_FloatRegister($src$$reg),
+ as_FloatRegister($shift$$reg));
+ %}
+ ins_pipe(vshift64);
+%}
+
+instruct vsrl8B_var(vecD dst, vecD src, vecD shift) %{
+ predicate((n->as_Vector()->length() == 4 || n->as_Vector()->length() == 8) &&
+ n->as_ShiftV()->is_var_shift());
+ match(Set dst (URShiftVB src shift));
+ ins_cost(INSN_COST * 2);
+ effect(TEMP_DEF dst);
+ format %{ "negr $dst,$shift\t"
+ "ushl $dst,$src,$dst\t# vector (8B)" %}
+ ins_encode %{
+ __ negr(as_FloatRegister($dst$$reg), __ T8B,
as_FloatRegister($shift$$reg));
__ ushl(as_FloatRegister($dst$$reg), __ T8B,
as_FloatRegister($src$$reg),
- as_FloatRegister($tmp$$reg));
+ as_FloatRegister($dst$$reg));
%}
ins_pipe(vshift64);
%}
-instruct vsrl16B(vecX dst, vecX src, vecX shift, vecX tmp) %{
- predicate(n->as_Vector()->length() == 16);
+instruct vsrl16B(vecX dst, vecX src, vecX shift) %{
+ predicate(n->as_Vector()->length() == 16 && !n->as_ShiftV()->is_var_shift());
match(Set dst (URShiftVB src shift));
ins_cost(INSN_COST);
- effect(TEMP tmp);
- format %{ "negr $tmp,$shift\t"
- "ushl $dst,$src,$tmp\t# vector (16B)" %}
+ format %{ "ushl $dst,$src,$shift\t# vector (16B)" %}
+ ins_encode %{
+ __ ushl(as_FloatRegister($dst$$reg), __ T16B,
+ as_FloatRegister($src$$reg),
+ as_FloatRegister($shift$$reg));
+ %}
+ ins_pipe(vshift128);
+%}
+
+instruct vsrl16B_var(vecX dst, vecX src, vecX shift) %{
+ predicate(n->as_Vector()->length() == 16 && n->as_ShiftV()->is_var_shift());
+ match(Set dst (URShiftVB src shift));
+ ins_cost(INSN_COST * 2);
+ effect(TEMP_DEF dst);
+ format %{ "negr $dst,$shift\t"
+ "ushl $dst,$src,$dst\t# vector (16B)" %}
ins_encode %{
- __ negr(as_FloatRegister($tmp$$reg), __ T16B,
+ __ negr(as_FloatRegister($dst$$reg), __ T16B,
as_FloatRegister($shift$$reg));
__ ushl(as_FloatRegister($dst$$reg), __ T16B,
as_FloatRegister($src$$reg),
- as_FloatRegister($tmp$$reg));
+ as_FloatRegister($dst$$reg));
%}
ins_pipe(vshift128);
%}
instruct vsll8B_imm(vecD dst, vecD src, immI shift) %{
- predicate(n->as_Vector()->length() == 4 ||
- n->as_Vector()->length() == 8);
+ predicate((n->as_Vector()->length() == 4 || n->as_Vector()->length() == 8) &&
+ assert_not_var_shift(n));
match(Set dst (LShiftVB src (LShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "shl $dst, $src, $shift\t# vector (8B)" %}
+ format %{ "shl $dst, $src, $shift\t# vector (8B)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh >= 8) {
@@ -4562,10 +4654,10 @@ instruct vsll8B_imm(vecD dst, vecD src, immI shift) %{
%}
instruct vsll16B_imm(vecX dst, vecX src, immI shift) %{
- predicate(n->as_Vector()->length() == 16);
+ predicate(n->as_Vector()->length() == 16 && assert_not_var_shift(n));
match(Set dst (LShiftVB src (LShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "shl $dst, $src, $shift\t# vector (16B)" %}
+ format %{ "shl $dst, $src, $shift\t# vector (16B)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh >= 8) {
@@ -4581,40 +4673,40 @@ instruct vsll16B_imm(vecX dst, vecX src, immI shift) %{
%}
instruct vsra8B_imm(vecD dst, vecD src, immI shift) %{
- predicate(n->as_Vector()->length() == 4 ||
- n->as_Vector()->length() == 8);
+ predicate((n->as_Vector()->length() == 4 || n->as_Vector()->length() == 8) &&
+ assert_not_var_shift(n));
match(Set dst (RShiftVB src (RShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "sshr $dst, $src, $shift\t# vector (8B)" %}
+ format %{ "sshr $dst, $src, $shift\t# vector (8B)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh >= 8) sh = 7;
__ sshr(as_FloatRegister($dst$$reg), __ T8B,
- as_FloatRegister($src$$reg), sh);
+ as_FloatRegister($src$$reg), sh);
%}
ins_pipe(vshift64_imm);
%}
instruct vsra16B_imm(vecX dst, vecX src, immI shift) %{
- predicate(n->as_Vector()->length() == 16);
+ predicate(n->as_Vector()->length() == 16 && assert_not_var_shift(n));
match(Set dst (RShiftVB src (RShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "sshr $dst, $src, $shift\t# vector (16B)" %}
+ format %{ "sshr $dst, $src, $shift\t# vector (16B)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh >= 8) sh = 7;
__ sshr(as_FloatRegister($dst$$reg), __ T16B,
- as_FloatRegister($src$$reg), sh);
+ as_FloatRegister($src$$reg), sh);
%}
ins_pipe(vshift128_imm);
%}
instruct vsrl8B_imm(vecD dst, vecD src, immI shift) %{
- predicate(n->as_Vector()->length() == 4 ||
- n->as_Vector()->length() == 8);
+ predicate((n->as_Vector()->length() == 4 || n->as_Vector()->length() == 8) &&
+ assert_not_var_shift(n));
match(Set dst (URShiftVB src (RShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "ushr $dst, $src, $shift\t# vector (8B)" %}
+ format %{ "ushr $dst, $src, $shift\t# vector (8B)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh >= 8) {
@@ -4623,17 +4715,17 @@ instruct vsrl8B_imm(vecD dst, vecD src, immI shift) %{
as_FloatRegister($src$$reg));
} else {
__ ushr(as_FloatRegister($dst$$reg), __ T8B,
- as_FloatRegister($src$$reg), sh);
+ as_FloatRegister($src$$reg), sh);
}
%}
ins_pipe(vshift64_imm);
%}
instruct vsrl16B_imm(vecX dst, vecX src, immI shift) %{
- predicate(n->as_Vector()->length() == 16);
+ predicate(n->as_Vector()->length() == 16 && assert_not_var_shift(n));
match(Set dst (URShiftVB src (RShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "ushr $dst, $src, $shift\t# vector (16B)" %}
+ format %{ "ushr $dst, $src, $shift\t# vector (16B)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh >= 8) {
@@ -4642,15 +4734,14 @@ instruct vsrl16B_imm(vecX dst, vecX src, immI shift) %{
as_FloatRegister($src$$reg));
} else {
__ ushr(as_FloatRegister($dst$$reg), __ T16B,
- as_FloatRegister($src$$reg), sh);
+ as_FloatRegister($src$$reg), sh);
}
%}
ins_pipe(vshift128_imm);
%}
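The `*_imm` rules above clamp out-of-range byte shift counts rather than trusting the encoding: a signed shift of 8 or more is rewritten to 7 (the result is already all sign bits), and an unsigned shift of 8 or more becomes a register clear. A small Java sketch of the lane semantics those clamps preserve (illustrative only, not patch code):

```java
public class ByteShiftClampSketch {
    public static void main(String[] args) {
        byte b = (byte) 0x80; // -128
        // A signed shift by >= the lane width keeps filling with the sign
        // bit, so it equals a shift by 7; hence the matcher clamps sh to 7.
        System.out.println((byte) (b >> 8) == (byte) (b >> 7)); // true
        // An unsigned shift by >= the lane width yields 0, which the matcher
        // encodes as eor(dst, dst, dst) instead of a ushr instruction.
        System.out.println((byte) ((b & 0xFF) >>> 8)); // 0
    }
}
```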
instruct vsll4S(vecD dst, vecD src, vecD shift) %{
- predicate(n->as_Vector()->length() == 2 ||
- n->as_Vector()->length() == 4);
+ predicate(n->as_Vector()->length() == 2 || n->as_Vector()->length() == 4);
match(Set dst (LShiftVS src shift));
ins_cost(INSN_COST);
format %{ "sshl $dst,$src,$shift\t# vector (4H)" %}
@@ -4675,82 +4766,136 @@ instruct vsll8S(vecX dst, vecX src, vecX shift) %{
ins_pipe(vshift128);
%}
-instruct vsra4S(vecD dst, vecD src, vecD shift, vecD tmp) %{
- predicate(n->as_Vector()->length() == 2 ||
- n->as_Vector()->length() == 4);
+instruct vsra4S(vecD dst, vecD src, vecD shift) %{
+ predicate((n->as_Vector()->length() == 2 || n->as_Vector()->length() == 4) &&
+ !n->as_ShiftV()->is_var_shift());
match(Set dst (RShiftVS src shift));
ins_cost(INSN_COST);
- effect(TEMP tmp);
- format %{ "negr $tmp,$shift\t"
- "sshl $dst,$src,$tmp\t# vector (4H)" %}
+ format %{ "sshl $dst,$src,$shift\t# vector (4H)" %}
+ ins_encode %{
+ __ sshl(as_FloatRegister($dst$$reg), __ T4H,
+ as_FloatRegister($src$$reg),
+ as_FloatRegister($shift$$reg));
+ %}
+ ins_pipe(vshift64);
+%}
+
+instruct vsra4S_var(vecD dst, vecD src, vecD shift) %{
+ predicate((n->as_Vector()->length() == 2 || n->as_Vector()->length() == 4) &&
+ n->as_ShiftV()->is_var_shift());
+ match(Set dst (RShiftVS src shift));
+ ins_cost(INSN_COST * 2);
+ effect(TEMP_DEF dst);
+ format %{ "negr $dst,$shift\t"
+ "sshl $dst,$src,$dst\t# vector (4H)" %}
ins_encode %{
- __ negr(as_FloatRegister($tmp$$reg), __ T8B,
+ __ negr(as_FloatRegister($dst$$reg), __ T8B,
as_FloatRegister($shift$$reg));
__ sshl(as_FloatRegister($dst$$reg), __ T4H,
as_FloatRegister($src$$reg),
- as_FloatRegister($tmp$$reg));
+ as_FloatRegister($dst$$reg));
%}
ins_pipe(vshift64);
%}
-instruct vsra8S(vecX dst, vecX src, vecX shift, vecX tmp) %{
- predicate(n->as_Vector()->length() == 8);
+instruct vsra8S(vecX dst, vecX src, vecX shift) %{
+ predicate(n->as_Vector()->length() == 8 && !n->as_ShiftV()->is_var_shift());
match(Set dst (RShiftVS src shift));
ins_cost(INSN_COST);
- effect(TEMP tmp);
- format %{ "negr $tmp,$shift\t"
- "sshl $dst,$src,$tmp\t# vector (8H)" %}
+ format %{ "sshl $dst,$src,$shift\t# vector (8H)" %}
ins_encode %{
- __ negr(as_FloatRegister($tmp$$reg), __ T16B,
+ __ sshl(as_FloatRegister($dst$$reg), __ T8H,
+ as_FloatRegister($src$$reg),
+ as_FloatRegister($shift$$reg));
+ %}
+ ins_pipe(vshift128);
+%}
+
+instruct vsra8S_var(vecX dst, vecX src, vecX shift) %{
+ predicate(n->as_Vector()->length() == 8 && n->as_ShiftV()->is_var_shift());
+ match(Set dst (RShiftVS src shift));
+ ins_cost(INSN_COST * 2);
+ effect(TEMP_DEF dst);
+ format %{ "negr $dst,$shift\t"
+ "sshl $dst,$src,$dst\t# vector (8H)" %}
+ ins_encode %{
+ __ negr(as_FloatRegister($dst$$reg), __ T16B,
as_FloatRegister($shift$$reg));
__ sshl(as_FloatRegister($dst$$reg), __ T8H,
as_FloatRegister($src$$reg),
- as_FloatRegister($tmp$$reg));
+ as_FloatRegister($dst$$reg));
%}
ins_pipe(vshift128);
%}
-instruct vsrl4S(vecD dst, vecD src, vecD shift, vecD tmp) %{
- predicate(n->as_Vector()->length() == 2 ||
- n->as_Vector()->length() == 4);
+instruct vsrl4S(vecD dst, vecD src, vecD shift) %{
+ predicate((n->as_Vector()->length() == 2 || n->as_Vector()->length() == 4) &&
+ !n->as_ShiftV()->is_var_shift());
match(Set dst (URShiftVS src shift));
ins_cost(INSN_COST);
- effect(TEMP tmp);
- format %{ "negr $tmp,$shift\t"
- "ushl $dst,$src,$tmp\t# vector (4H)" %}
+ format %{ "ushl $dst,$src,$shift\t# vector (4H)" %}
+ ins_encode %{
+ __ ushl(as_FloatRegister($dst$$reg), __ T4H,
+ as_FloatRegister($src$$reg),
+ as_FloatRegister($shift$$reg));
+ %}
+ ins_pipe(vshift64);
+%}
+
+instruct vsrl4S_var(vecD dst, vecD src, vecD shift) %{
+ predicate((n->as_Vector()->length() == 2 || n->as_Vector()->length() == 4) &&
+ n->as_ShiftV()->is_var_shift());
+ match(Set dst (URShiftVS src shift));
+ ins_cost(INSN_COST * 2);
+ effect(TEMP_DEF dst);
+ format %{ "negr $dst,$shift\t"
+ "ushl $dst,$src,$dst\t# vector (4H)" %}
ins_encode %{
- __ negr(as_FloatRegister($tmp$$reg), __ T8B,
+ __ negr(as_FloatRegister($dst$$reg), __ T8B,
as_FloatRegister($shift$$reg));
__ ushl(as_FloatRegister($dst$$reg), __ T4H,
as_FloatRegister($src$$reg),
- as_FloatRegister($tmp$$reg));
+ as_FloatRegister($dst$$reg));
%}
ins_pipe(vshift64);
%}
-instruct vsrl8S(vecX dst, vecX src, vecX shift, vecX tmp) %{
- predicate(n->as_Vector()->length() == 8);
+instruct vsrl8S(vecX dst, vecX src, vecX shift) %{
+ predicate(n->as_Vector()->length() == 8 && !n->as_ShiftV()->is_var_shift());
match(Set dst (URShiftVS src shift));
ins_cost(INSN_COST);
- effect(TEMP tmp);
- format %{ "negr $tmp,$shift\t"
- "ushl $dst,$src,$tmp\t# vector (8H)" %}
+ format %{ "ushl $dst,$src,$shift\t# vector (8H)" %}
ins_encode %{
- __ negr(as_FloatRegister($tmp$$reg), __ T16B,
+ __ ushl(as_FloatRegister($dst$$reg), __ T8H,
+ as_FloatRegister($src$$reg),
+ as_FloatRegister($shift$$reg));
+ %}
+ ins_pipe(vshift128);
+%}
+
+instruct vsrl8S_var(vecX dst, vecX src, vecX shift) %{
+ predicate(n->as_Vector()->length() == 8 && n->as_ShiftV()->is_var_shift());
+ match(Set dst (URShiftVS src shift));
+ ins_cost(INSN_COST * 2);
+ effect(TEMP_DEF dst);
+ format %{ "negr $dst,$shift\t"
+ "ushl $dst,$src,$dst\t# vector (8H)" %}
+ ins_encode %{
+ __ negr(as_FloatRegister($dst$$reg), __ T16B,
as_FloatRegister($shift$$reg));
__ ushl(as_FloatRegister($dst$$reg), __ T8H,
as_FloatRegister($src$$reg),
- as_FloatRegister($tmp$$reg));
+ as_FloatRegister($dst$$reg));
%}
ins_pipe(vshift128);
%}
instruct vsll4S_imm(vecD dst, vecD src, immI shift) %{
- predicate(n->as_Vector()->length() == 2 ||
- n->as_Vector()->length() == 4);
+ predicate((n->as_Vector()->length() == 2 || n->as_Vector()->length() == 4) &&
+ assert_not_var_shift(n));
match(Set dst (LShiftVS src (LShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "shl $dst, $src, $shift\t# vector (4H)" %}
+ format %{ "shl $dst, $src, $shift\t# vector (4H)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh >= 16) {
@@ -4766,10 +4911,10 @@ instruct vsll4S_imm(vecD dst, vecD src, immI shift) %{
%}
instruct vsll8S_imm(vecX dst, vecX src, immI shift) %{
- predicate(n->as_Vector()->length() == 8);
+ predicate(n->as_Vector()->length() == 8 && assert_not_var_shift(n));
match(Set dst (LShiftVS src (LShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "shl $dst, $src, $shift\t# vector (8H)" %}
+ format %{ "shl $dst, $src, $shift\t# vector (8H)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh >= 16) {
@@ -4785,40 +4930,40 @@ instruct vsll8S_imm(vecX dst, vecX src, immI shift) %{
%}
instruct vsra4S_imm(vecD dst, vecD src, immI shift) %{
- predicate(n->as_Vector()->length() == 2 ||
- n->as_Vector()->length() == 4);
+ predicate((n->as_Vector()->length() == 2 || n->as_Vector()->length() == 4) &&
+ assert_not_var_shift(n));
match(Set dst (RShiftVS src (RShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "sshr $dst, $src, $shift\t# vector (4H)" %}
+ format %{ "sshr $dst, $src, $shift\t# vector (4H)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh >= 16) sh = 15;
__ sshr(as_FloatRegister($dst$$reg), __ T4H,
- as_FloatRegister($src$$reg), sh);
+ as_FloatRegister($src$$reg), sh);
%}
ins_pipe(vshift64_imm);
%}
instruct vsra8S_imm(vecX dst, vecX src, immI shift) %{
- predicate(n->as_Vector()->length() == 8);
+ predicate(n->as_Vector()->length() == 8 && assert_not_var_shift(n));
match(Set dst (RShiftVS src (RShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "sshr $dst, $src, $shift\t# vector (8H)" %}
+ format %{ "sshr $dst, $src, $shift\t# vector (8H)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh >= 16) sh = 15;
__ sshr(as_FloatRegister($dst$$reg), __ T8H,
- as_FloatRegister($src$$reg), sh);
+ as_FloatRegister($src$$reg), sh);
%}
ins_pipe(vshift128_imm);
%}
instruct vsrl4S_imm(vecD dst, vecD src, immI shift) %{
- predicate(n->as_Vector()->length() == 2 ||
- n->as_Vector()->length() == 4);
+ predicate((n->as_Vector()->length() == 2 || n->as_Vector()->length() == 4) &&
+ assert_not_var_shift(n));
match(Set dst (URShiftVS src (RShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "ushr $dst, $src, $shift\t# vector (4H)" %}
+ format %{ "ushr $dst, $src, $shift\t# vector (4H)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh >= 16) {
@@ -4827,17 +4972,17 @@ instruct vsrl4S_imm(vecD dst, vecD src, immI shift) %{
as_FloatRegister($src$$reg));
} else {
__ ushr(as_FloatRegister($dst$$reg), __ T4H,
- as_FloatRegister($src$$reg), sh);
+ as_FloatRegister($src$$reg), sh);
}
%}
ins_pipe(vshift64_imm);
%}
instruct vsrl8S_imm(vecX dst, vecX src, immI shift) %{
- predicate(n->as_Vector()->length() == 8);
+ predicate(n->as_Vector()->length() == 8 && assert_not_var_shift(n));
match(Set dst (URShiftVS src (RShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "ushr $dst, $src, $shift\t# vector (8H)" %}
+ format %{ "ushr $dst, $src, $shift\t# vector (8H)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh >= 16) {
@@ -4846,7 +4991,7 @@ instruct vsrl8S_imm(vecX dst, vecX src, immI shift) %{
as_FloatRegister($src$$reg));
} else {
__ ushr(as_FloatRegister($dst$$reg), __ T8H,
- as_FloatRegister($src$$reg), sh);
+ as_FloatRegister($src$$reg), sh);
}
%}
ins_pipe(vshift128_imm);
@@ -4878,79 +5023,131 @@ instruct vsll4I(vecX dst, vecX src, vecX shift) %{
ins_pipe(vshift128);
%}
-instruct vsra2I(vecD dst, vecD src, vecD shift, vecD tmp) %{
- predicate(n->as_Vector()->length() == 2);
+instruct vsra2I(vecD dst, vecD src, vecD shift) %{
+ predicate(n->as_Vector()->length() == 2 && !n->as_ShiftV()->is_var_shift());
match(Set dst (RShiftVI src shift));
ins_cost(INSN_COST);
- effect(TEMP tmp);
- format %{ "negr $tmp,$shift\t"
- "sshl $dst,$src,$tmp\t# vector (2S)" %}
+ format %{ "sshl $dst,$src,$shift\t# vector (2S)" %}
+ ins_encode %{
+ __ sshl(as_FloatRegister($dst$$reg), __ T2S,
+ as_FloatRegister($src$$reg),
+ as_FloatRegister($shift$$reg));
+ %}
+ ins_pipe(vshift64);
+%}
+
+instruct vsra2I_var(vecD dst, vecD src, vecD shift) %{
+ predicate(n->as_Vector()->length() == 2 && n->as_ShiftV()->is_var_shift());
+ match(Set dst (RShiftVI src shift));
+ ins_cost(INSN_COST * 2);
+ effect(TEMP_DEF dst);
+ format %{ "negr $dst,$shift\t"
+ "sshl $dst,$src,$dst\t# vector (2S)" %}
ins_encode %{
- __ negr(as_FloatRegister($tmp$$reg), __ T8B,
+ __ negr(as_FloatRegister($dst$$reg), __ T8B,
as_FloatRegister($shift$$reg));
__ sshl(as_FloatRegister($dst$$reg), __ T2S,
as_FloatRegister($src$$reg),
- as_FloatRegister($tmp$$reg));
+ as_FloatRegister($dst$$reg));
%}
ins_pipe(vshift64);
%}
-instruct vsra4I(vecX dst, vecX src, vecX shift, vecX tmp) %{
- predicate(n->as_Vector()->length() == 4);
+instruct vsra4I(vecX dst, vecX src, vecX shift) %{
+ predicate(n->as_Vector()->length() == 4 && !n->as_ShiftV()->is_var_shift());
match(Set dst (RShiftVI src shift));
ins_cost(INSN_COST);
- effect(TEMP tmp);
- format %{ "negr $tmp,$shift\t"
- "sshl $dst,$src,$tmp\t# vector (4S)" %}
+ format %{ "sshl $dst,$src,$shift\t# vector (4S)" %}
ins_encode %{
- __ negr(as_FloatRegister($tmp$$reg), __ T16B,
+ __ sshl(as_FloatRegister($dst$$reg), __ T4S,
+ as_FloatRegister($src$$reg),
+ as_FloatRegister($shift$$reg));
+ %}
+ ins_pipe(vshift128);
+%}
+
+instruct vsra4I_var(vecX dst, vecX src, vecX shift) %{
+ predicate(n->as_Vector()->length() == 4 && n->as_ShiftV()->is_var_shift());
+ match(Set dst (RShiftVI src shift));
+ ins_cost(INSN_COST * 2);
+ effect(TEMP_DEF dst);
+ format %{ "negr $dst,$shift\t"
+ "sshl $dst,$src,$dst\t# vector (4S)" %}
+ ins_encode %{
+ __ negr(as_FloatRegister($dst$$reg), __ T16B,
as_FloatRegister($shift$$reg));
__ sshl(as_FloatRegister($dst$$reg), __ T4S,
as_FloatRegister($src$$reg),
- as_FloatRegister($tmp$$reg));
+ as_FloatRegister($dst$$reg));
%}
ins_pipe(vshift128);
%}
-instruct vsrl2I(vecD dst, vecD src, vecD shift, vecD tmp) %{
- predicate(n->as_Vector()->length() == 2);
+instruct vsrl2I(vecD dst, vecD src, vecD shift) %{
+ predicate(n->as_Vector()->length() == 2 && !n->as_ShiftV()->is_var_shift());
match(Set dst (URShiftVI src shift));
ins_cost(INSN_COST);
- effect(TEMP tmp);
- format %{ "negr $tmp,$shift\t"
- "ushl $dst,$src,$tmp\t# vector (2S)" %}
+ format %{ "ushl $dst,$src,$shift\t# vector (2S)" %}
+ ins_encode %{
+ __ ushl(as_FloatRegister($dst$$reg), __ T2S,
+ as_FloatRegister($src$$reg),
+ as_FloatRegister($shift$$reg));
+ %}
+ ins_pipe(vshift64);
+%}
+
+instruct vsrl2I_var(vecD dst, vecD src, vecD shift) %{
+ predicate(n->as_Vector()->length() == 2 && n->as_ShiftV()->is_var_shift());
+ match(Set dst (URShiftVI src shift));
+ ins_cost(INSN_COST * 2);
+ effect(TEMP_DEF dst);
+ format %{ "negr $dst,$shift\t"
+ "ushl $dst,$src,$dst\t# vector (2S)" %}
ins_encode %{
- __ negr(as_FloatRegister($tmp$$reg), __ T8B,
+ __ negr(as_FloatRegister($dst$$reg), __ T8B,
as_FloatRegister($shift$$reg));
__ ushl(as_FloatRegister($dst$$reg), __ T2S,
as_FloatRegister($src$$reg),
- as_FloatRegister($tmp$$reg));
+ as_FloatRegister($dst$$reg));
%}
ins_pipe(vshift64);
%}
-instruct vsrl4I(vecX dst, vecX src, vecX shift, vecX tmp) %{
- predicate(n->as_Vector()->length() == 4);
+instruct vsrl4I(vecX dst, vecX src, vecX shift) %{
+ predicate(n->as_Vector()->length() == 4 && !n->as_ShiftV()->is_var_shift());
match(Set dst (URShiftVI src shift));
ins_cost(INSN_COST);
- effect(TEMP tmp);
- format %{ "negr $tmp,$shift\t"
- "ushl $dst,$src,$tmp\t# vector (4S)" %}
+ format %{ "ushl $dst,$src,$shift\t# vector (4S)" %}
ins_encode %{
- __ negr(as_FloatRegister($tmp$$reg), __ T16B,
+ __ ushl(as_FloatRegister($dst$$reg), __ T4S,
+ as_FloatRegister($src$$reg),
+ as_FloatRegister($shift$$reg));
+ %}
+ ins_pipe(vshift128);
+%}
+
+instruct vsrl4I_var(vecX dst, vecX src, vecX shift) %{
+ predicate(n->as_Vector()->length() == 4 && n->as_ShiftV()->is_var_shift());
+ match(Set dst (URShiftVI src shift));
+ ins_cost(INSN_COST * 2);
+ effect(TEMP_DEF dst);
+ format %{ "negr $dst,$shift\t"
+ "ushl $dst,$src,$dst\t# vector (4S)" %}
+ ins_encode %{
+ __ negr(as_FloatRegister($dst$$reg), __ T16B,
as_FloatRegister($shift$$reg));
__ ushl(as_FloatRegister($dst$$reg), __ T4S,
as_FloatRegister($src$$reg),
- as_FloatRegister($tmp$$reg));
+ as_FloatRegister($dst$$reg));
%}
ins_pipe(vshift128);
%}
instruct vsll2I_imm(vecD dst, vecD src, immI shift) %{
- predicate(n->as_Vector()->length() == 2);
+ predicate(n->as_Vector()->length() == 2 && assert_not_var_shift(n));
match(Set dst (LShiftVI src (LShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "shl $dst, $src, $shift\t# vector (2S)" %}
+ format %{ "shl $dst, $src, $shift\t# vector (2S)" %}
ins_encode %{
__ shl(as_FloatRegister($dst$$reg), __ T2S,
as_FloatRegister($src$$reg),
@@ -4960,10 +5157,10 @@ instruct vsll2I_imm(vecD dst, vecD src, immI shift) %{
%}
instruct vsll4I_imm(vecX dst, vecX src, immI shift) %{
- predicate(n->as_Vector()->length() == 4);
+ predicate(n->as_Vector()->length() == 4 && assert_not_var_shift(n));
match(Set dst (LShiftVI src (LShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "shl $dst, $src, $shift\t# vector (4S)" %}
+ format %{ "shl $dst, $src, $shift\t# vector (4S)" %}
ins_encode %{
__ shl(as_FloatRegister($dst$$reg), __ T4S,
as_FloatRegister($src$$reg),
@@ -4973,10 +5170,10 @@ instruct vsll4I_imm(vecX dst, vecX src, immI shift) %{
%}
instruct vsra2I_imm(vecD dst, vecD src, immI shift) %{
- predicate(n->as_Vector()->length() == 2);
+ predicate(n->as_Vector()->length() == 2 && assert_not_var_shift(n));
match(Set dst (RShiftVI src (RShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "sshr $dst, $src, $shift\t# vector (2S)" %}
+ format %{ "sshr $dst, $src, $shift\t# vector (2S)" %}
ins_encode %{
__ sshr(as_FloatRegister($dst$$reg), __ T2S,
as_FloatRegister($src$$reg),
@@ -4986,10 +5183,10 @@ instruct vsra2I_imm(vecD dst, vecD src, immI shift) %{
%}
instruct vsra4I_imm(vecX dst, vecX src, immI shift) %{
- predicate(n->as_Vector()->length() == 4);
+ predicate(n->as_Vector()->length() == 4 && assert_not_var_shift(n));
match(Set dst (RShiftVI src (RShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "sshr $dst, $src, $shift\t# vector (4S)" %}
+ format %{ "sshr $dst, $src, $shift\t# vector (4S)" %}
ins_encode %{
__ sshr(as_FloatRegister($dst$$reg), __ T4S,
as_FloatRegister($src$$reg),
@@ -4999,10 +5196,10 @@ instruct vsra4I_imm(vecX dst, vecX src, immI shift) %{
%}
instruct vsrl2I_imm(vecD dst, vecD src, immI shift) %{
- predicate(n->as_Vector()->length() == 2);
+ predicate(n->as_Vector()->length() == 2 && assert_not_var_shift(n));
match(Set dst (URShiftVI src (RShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "ushr $dst, $src, $shift\t# vector (2S)" %}
+ format %{ "ushr $dst, $src, $shift\t# vector (2S)" %}
ins_encode %{
__ ushr(as_FloatRegister($dst$$reg), __ T2S,
as_FloatRegister($src$$reg),
@@ -5012,10 +5209,10 @@ instruct vsrl2I_imm(vecD dst, vecD src, immI shift) %{
%}
instruct vsrl4I_imm(vecX dst, vecX src, immI shift) %{
- predicate(n->as_Vector()->length() == 4);
+ predicate(n->as_Vector()->length() == 4 && assert_not_var_shift(n));
match(Set dst (URShiftVI src (RShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "ushr $dst, $src, $shift\t# vector (4S)" %}
+ format %{ "ushr $dst, $src, $shift\t# vector (4S)" %}
ins_encode %{
__ ushr(as_FloatRegister($dst$$reg), __ T4S,
as_FloatRegister($src$$reg),
@@ -5037,45 +5234,71 @@ instruct vsll2L(vecX dst, vecX src, vecX shift) %{
ins_pipe(vshift128);
%}
-instruct vsra2L(vecX dst, vecX src, vecX shift, vecX tmp) %{
- predicate(n->as_Vector()->length() == 2);
+instruct vsra2L(vecX dst, vecX src, vecX shift) %{
+ predicate(n->as_Vector()->length() == 2 && !n->as_ShiftV()->is_var_shift());
match(Set dst (RShiftVL src shift));
ins_cost(INSN_COST);
- effect(TEMP tmp);
- format %{ "negr $tmp,$shift\t"
- "sshl $dst,$src,$tmp\t# vector (2D)" %}
+ format %{ "sshl $dst,$src,$shift\t# vector (2D)" %}
ins_encode %{
- __ negr(as_FloatRegister($tmp$$reg), __ T16B,
+ __ sshl(as_FloatRegister($dst$$reg), __ T2D,
+ as_FloatRegister($src$$reg),
+ as_FloatRegister($shift$$reg));
+ %}
+ ins_pipe(vshift128);
+%}
+
+instruct vsra2L_var(vecX dst, vecX src, vecX shift) %{
+ predicate(n->as_Vector()->length() == 2 && n->as_ShiftV()->is_var_shift());
+ match(Set dst (RShiftVL src shift));
+ ins_cost(INSN_COST * 2);
+ effect(TEMP_DEF dst);
+ format %{ "negr $dst,$shift\t"
+ "sshl $dst,$src,$dst\t# vector (2D)" %}
+ ins_encode %{
+ __ negr(as_FloatRegister($dst$$reg), __ T16B,
as_FloatRegister($shift$$reg));
__ sshl(as_FloatRegister($dst$$reg), __ T2D,
as_FloatRegister($src$$reg),
- as_FloatRegister($tmp$$reg));
+ as_FloatRegister($dst$$reg));
%}
ins_pipe(vshift128);
%}
-instruct vsrl2L(vecX dst, vecX src, vecX shift, vecX tmp) %{
- predicate(n->as_Vector()->length() == 2);
+instruct vsrl2L(vecX dst, vecX src, vecX shift) %{
+ predicate(n->as_Vector()->length() == 2 && !n->as_ShiftV()->is_var_shift());
match(Set dst (URShiftVL src shift));
ins_cost(INSN_COST);
- effect(TEMP tmp);
- format %{ "negr $tmp,$shift\t"
- "ushl $dst,$src,$tmp\t# vector (2D)" %}
+ format %{ "ushl $dst,$src,$shift\t# vector (2D)" %}
+ ins_encode %{
+ __ ushl(as_FloatRegister($dst$$reg), __ T2D,
+ as_FloatRegister($src$$reg),
+ as_FloatRegister($shift$$reg));
+ %}
+ ins_pipe(vshift128);
+%}
+
+instruct vsrl2L_var(vecX dst, vecX src, vecX shift) %{
+ predicate(n->as_Vector()->length() == 2 && n->as_ShiftV()->is_var_shift());
+ match(Set dst (URShiftVL src shift));
+ ins_cost(INSN_COST * 2);
+ effect(TEMP_DEF dst);
+ format %{ "negr $dst,$shift\t"
+ "ushl $dst,$src,$dst\t# vector (2D)" %}
ins_encode %{
- __ negr(as_FloatRegister($tmp$$reg), __ T16B,
+ __ negr(as_FloatRegister($dst$$reg), __ T16B,
as_FloatRegister($shift$$reg));
__ ushl(as_FloatRegister($dst$$reg), __ T2D,
as_FloatRegister($src$$reg),
- as_FloatRegister($tmp$$reg));
+ as_FloatRegister($dst$$reg));
%}
ins_pipe(vshift128);
%}
instruct vsll2L_imm(vecX dst, vecX src, immI shift) %{
- predicate(n->as_Vector()->length() == 2);
+ predicate(n->as_Vector()->length() == 2 && assert_not_var_shift(n));
match(Set dst (LShiftVL src (LShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "shl $dst, $src, $shift\t# vector (2D)" %}
+ format %{ "shl $dst, $src, $shift\t# vector (2D)" %}
ins_encode %{
__ shl(as_FloatRegister($dst$$reg), __ T2D,
as_FloatRegister($src$$reg),
@@ -5085,10 +5308,10 @@ instruct vsll2L_imm(vecX dst, vecX src, immI shift) %{
%}
instruct vsra2L_imm(vecX dst, vecX src, immI shift) %{
- predicate(n->as_Vector()->length() == 2);
+ predicate(n->as_Vector()->length() == 2 && assert_not_var_shift(n));
match(Set dst (RShiftVL src (RShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "sshr $dst, $src, $shift\t# vector (2D)" %}
+ format %{ "sshr $dst, $src, $shift\t# vector (2D)" %}
ins_encode %{
__ sshr(as_FloatRegister($dst$$reg), __ T2D,
as_FloatRegister($src$$reg),
@@ -5098,10 +5321,10 @@ instruct vsra2L_imm(vecX dst, vecX src, immI shift) %{
%}
instruct vsrl2L_imm(vecX dst, vecX src, immI shift) %{
- predicate(n->as_Vector()->length() == 2);
+ predicate(n->as_Vector()->length() == 2 && assert_not_var_shift(n));
match(Set dst (URShiftVL src (RShiftCntV shift)));
ins_cost(INSN_COST);
- format %{ "ushr $dst, $src, $shift\t# vector (2D)" %}
+ format %{ "ushr $dst, $src, $shift\t# vector (2D)" %}
ins_encode %{
__ ushr(as_FloatRegister($dst$$reg), __ T2D,
as_FloatRegister($src$$reg),
@@ -5114,12 +5337,12 @@ instruct vsraa8B_imm(vecD dst, vecD src, immI shift) %{
predicate(n->as_Vector()->length() == 8);
match(Set dst (AddVB dst (RShiftVB src (RShiftCntV shift))));
ins_cost(INSN_COST);
- format %{ "ssra $dst, $src, $shift\t# vector (8B)" %}
+ format %{ "ssra $dst, $src, $shift\t# vector (8B)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh >= 8) sh = 7;
__ ssra(as_FloatRegister($dst$$reg), __ T8B,
- as_FloatRegister($src$$reg), sh);
+ as_FloatRegister($src$$reg), sh);
%}
ins_pipe(vshift64_imm);
%}
@@ -5128,12 +5351,12 @@ instruct vsraa16B_imm(vecX dst, vecX src, immI shift) %{
predicate(n->as_Vector()->length() == 16);
match(Set dst (AddVB dst (RShiftVB src (RShiftCntV shift))));
ins_cost(INSN_COST);
- format %{ "ssra $dst, $src, $shift\t# vector (16B)" %}
+ format %{ "ssra $dst, $src, $shift\t# vector (16B)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh >= 8) sh = 7;
__ ssra(as_FloatRegister($dst$$reg), __ T16B,
- as_FloatRegister($src$$reg), sh);
+ as_FloatRegister($src$$reg), sh);
%}
ins_pipe(vshift128_imm);
%}
@@ -5142,12 +5365,12 @@ instruct vsraa4S_imm(vecD dst, vecD src, immI shift) %{
predicate(n->as_Vector()->length() == 4);
match(Set dst (AddVS dst (RShiftVS src (RShiftCntV shift))));
ins_cost(INSN_COST);
- format %{ "ssra $dst, $src, $shift\t# vector (4H)" %}
+ format %{ "ssra $dst, $src, $shift\t# vector (4H)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh >= 16) sh = 15;
__ ssra(as_FloatRegister($dst$$reg), __ T4H,
- as_FloatRegister($src$$reg), sh);
+ as_FloatRegister($src$$reg), sh);
%}
ins_pipe(vshift64_imm);
%}
@@ -5156,12 +5379,12 @@ instruct vsraa8S_imm(vecX dst, vecX src, immI shift) %{
predicate(n->as_Vector()->length() == 8);
match(Set dst (AddVS dst (RShiftVS src (RShiftCntV shift))));
ins_cost(INSN_COST);
- format %{ "ssra $dst, $src, $shift\t# vector (8H)" %}
+ format %{ "ssra $dst, $src, $shift\t# vector (8H)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh >= 16) sh = 15;
__ ssra(as_FloatRegister($dst$$reg), __ T8H,
- as_FloatRegister($src$$reg), sh);
+ as_FloatRegister($src$$reg), sh);
%}
ins_pipe(vshift128_imm);
%}
@@ -5170,7 +5393,7 @@ instruct vsraa2I_imm(vecD dst, vecD src, immI shift) %{
predicate(n->as_Vector()->length() == 2);
match(Set dst (AddVI dst (RShiftVI src (RShiftCntV shift))));
ins_cost(INSN_COST);
- format %{ "ssra $dst, $src, $shift\t# vector (2S)" %}
+ format %{ "ssra $dst, $src, $shift\t# vector (2S)" %}
ins_encode %{
__ ssra(as_FloatRegister($dst$$reg), __ T2S,
as_FloatRegister($src$$reg),
@@ -5183,7 +5406,7 @@ instruct vsraa4I_imm(vecX dst, vecX src, immI shift) %{
predicate(n->as_Vector()->length() == 4);
match(Set dst (AddVI dst (RShiftVI src (RShiftCntV shift))));
ins_cost(INSN_COST);
- format %{ "ssra $dst, $src, $shift\t# vector (4S)" %}
+ format %{ "ssra $dst, $src, $shift\t# vector (4S)" %}
ins_encode %{
__ ssra(as_FloatRegister($dst$$reg), __ T4S,
as_FloatRegister($src$$reg),
@@ -5196,7 +5419,7 @@ instruct vsraa2L_imm(vecX dst, vecX src, immI shift) %{
predicate(n->as_Vector()->length() == 2);
match(Set dst (AddVL dst (RShiftVL src (RShiftCntV shift))));
ins_cost(INSN_COST);
- format %{ "ssra $dst, $src, $shift\t# vector (2D)" %}
+ format %{ "ssra $dst, $src, $shift\t# vector (2D)" %}
ins_encode %{
__ ssra(as_FloatRegister($dst$$reg), __ T2D,
as_FloatRegister($src$$reg),
@@ -5209,12 +5432,12 @@ instruct vsrla8B_imm(vecD dst, vecD src, immI shift) %{
predicate(n->as_Vector()->length() == 8);
match(Set dst (AddVB dst (URShiftVB src (RShiftCntV shift))));
ins_cost(INSN_COST);
- format %{ "usra $dst, $src, $shift\t# vector (8B)" %}
+ format %{ "usra $dst, $src, $shift\t# vector (8B)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh < 8) {
__ usra(as_FloatRegister($dst$$reg), __ T8B,
- as_FloatRegister($src$$reg), sh);
+ as_FloatRegister($src$$reg), sh);
}
%}
ins_pipe(vshift64_imm);
@@ -5224,12 +5447,12 @@ instruct vsrla16B_imm(vecX dst, vecX src, immI shift) %{
predicate(n->as_Vector()->length() == 16);
match(Set dst (AddVB dst (URShiftVB src (RShiftCntV shift))));
ins_cost(INSN_COST);
- format %{ "usra $dst, $src, $shift\t# vector (16B)" %}
+ format %{ "usra $dst, $src, $shift\t# vector (16B)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh < 8) {
__ usra(as_FloatRegister($dst$$reg), __ T16B,
- as_FloatRegister($src$$reg), sh);
+ as_FloatRegister($src$$reg), sh);
}
%}
ins_pipe(vshift128_imm);
@@ -5239,12 +5462,12 @@ instruct vsrla4S_imm(vecD dst, vecD src, immI shift) %{
predicate(n->as_Vector()->length() == 4);
match(Set dst (AddVS dst (URShiftVS src (RShiftCntV shift))));
ins_cost(INSN_COST);
- format %{ "usra $dst, $src, $shift\t# vector (4H)" %}
+ format %{ "usra $dst, $src, $shift\t# vector (4H)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh < 16) {
__ usra(as_FloatRegister($dst$$reg), __ T4H,
- as_FloatRegister($src$$reg), sh);
+ as_FloatRegister($src$$reg), sh);
}
%}
ins_pipe(vshift64_imm);
@@ -5254,12 +5477,12 @@ instruct vsrla8S_imm(vecX dst, vecX src, immI shift) %{
predicate(n->as_Vector()->length() == 8);
match(Set dst (AddVS dst (URShiftVS src (RShiftCntV shift))));
ins_cost(INSN_COST);
- format %{ "usra $dst, $src, $shift\t# vector (8H)" %}
+ format %{ "usra $dst, $src, $shift\t# vector (8H)" %}
ins_encode %{
int sh = (int)$shift$$constant;
if (sh < 16) {
__ usra(as_FloatRegister($dst$$reg), __ T8H,
- as_FloatRegister($src$$reg), sh);
+ as_FloatRegister($src$$reg), sh);
}
%}
ins_pipe(vshift128_imm);
@@ -5269,7 +5492,7 @@ instruct vsrla2I_imm(vecD dst, vecD src, immI shift) %{
predicate(n->as_Vector()->length() == 2);
match(Set dst (AddVI dst (URShiftVI src (RShiftCntV shift))));
ins_cost(INSN_COST);
- format %{ "usra $dst, $src, $shift\t# vector (2S)" %}
+ format %{ "usra $dst, $src, $shift\t# vector (2S)" %}
ins_encode %{
__ usra(as_FloatRegister($dst$$reg), __ T2S,
as_FloatRegister($src$$reg),
@@ -5282,7 +5505,7 @@ instruct vsrla4I_imm(vecX dst, vecX src, immI shift) %{
predicate(n->as_Vector()->length() == 4);
match(Set dst (AddVI dst (URShiftVI src (RShiftCntV shift))));
ins_cost(INSN_COST);
- format %{ "usra $dst, $src, $shift\t# vector (4S)" %}
+ format %{ "usra $dst, $src, $shift\t# vector (4S)" %}
ins_encode %{
__ usra(as_FloatRegister($dst$$reg), __ T4S,
as_FloatRegister($src$$reg),
@@ -5295,7 +5518,7 @@ instruct vsrla2L_imm(vecX dst, vecX src, immI shift) %{
predicate(n->as_Vector()->length() == 2);
match(Set dst (AddVL dst (URShiftVL src (RShiftCntV shift))));
ins_cost(INSN_COST);
- format %{ "usra $dst, $src, $shift\t# vector (2D)" %}
+ format %{ "usra $dst, $src, $shift\t# vector (2D)" %}
ins_encode %{
__ usra(as_FloatRegister($dst$$reg), __ T2D,
as_FloatRegister($src$$reg),
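The rewritten rules above rely on a NEON detail: there is no right-shift-by-register instruction, so right shifts are emitted as `negr` of the count followed by `sshl`/`ushl`, whose lanes shift left for a non-negative count and right for a negative one. A scalar C sketch of a single 64-bit lane (illustrative only; `model_sshl`/`model_ushl` are hypothetical names, and right-shifting a negative value is assumed to be arithmetic, as it is on the compilers relevant here):

```c
#include <stdint.h>

/* Scalar model of one 64-bit lane under NEON sshl/ushl: a non-negative
 * count shifts left; a negative count shifts right by its magnitude.
 * (In C, >> on a negative int64_t is implementation-defined; gcc/clang
 * perform an arithmetic shift, which is what this model assumes.) */
static int64_t model_sshl(int64_t x, int cnt) {
    return cnt >= 0 ? (int64_t)((uint64_t)x << cnt) : x >> -cnt;
}

static uint64_t model_ushl(uint64_t x, int cnt) {
    return cnt >= 0 ? x << cnt : x >> -cnt;
}
```

So "negr $dst,$shift; sshl $dst,$src,$dst" computes an arithmetic right shift, and the `ushl` variant the logical one, which is exactly the pairing the `_var` rules emit.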
diff --git a/src/hotspot/cpu/aarch64/aarch64_neon_ad.m4 b/src/hotspot/cpu/aarch64/aarch64_neon_ad.m4
index ff94bb002fafc0d6c831a9fea239a289dcec482d..f98ddf4ee3655f91d45b07f2227b28ed8ae214eb 100644
--- a/src/hotspot/cpu/aarch64/aarch64_neon_ad.m4
+++ b/src/hotspot/cpu/aarch64/aarch64_neon_ad.m4
@@ -1,5 +1,5 @@
-// Copyright (c) 2020, 2021, Oracle and/or its affiliates. All rights reserved.
-// Copyright (c) 2020, 2021, Arm Limited. All rights reserved.
+// Copyright (c) 2020, 2022, Oracle and/or its affiliates. All rights reserved.
+// Copyright (c) 2020, 2022, Arm Limited. All rights reserved.
// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
//
// This code is free software; you can redistribute it and/or modify it
@@ -1972,223 +1972,277 @@ VLOGICAL(xor, eor, xor, Xor, 16, B, X)
// ------------------------------ Shift ---------------------------------------
dnl
-define(`VSHIFTCNT', `
-instruct vshiftcnt$3$4`'(vec$5 dst, iRegIorL2I cnt) %{
- predicate(UseSVE == 0 && (ifelse($3, 8, n->as_Vector()->length_in_bytes() == 4 ||`
- ')n->as_Vector()->length_in_bytes() == $3));
+define(`VSLCNT', `
+instruct vslcnt$1$2`'(vec$3 dst, iRegIorL2I cnt) %{
+ predicate(UseSVE == 0 && ifelse($1, 8,
+ (n->as_Vector()->length_in_bytes() == 4 ||`
+ 'n->as_Vector()->length_in_bytes() == $1),
+ n->as_Vector()->length_in_bytes() == $1));
match(Set dst (LShiftCntV cnt));
- match(Set dst (RShiftCntV cnt));
- format %{ "$1 $dst, $cnt\t# shift count vector ($3$4)" %}
+ ins_cost(INSN_COST);
+ format %{ "dup $dst, $cnt\t# shift count vector ($1$2)" %}
ins_encode %{
- __ $2(as_FloatRegister($dst$$reg), __ T$3$4, as_Register($cnt$$reg));
+ __ dup(as_FloatRegister($dst$$reg), __ T$1$2, as_Register($cnt$$reg));
%}
- ins_pipe(vdup_reg_reg`'ifelse($5, D, 64, 128));
+ ins_pipe(vdup_reg_reg`'ifelse($3, D, 64, 128));
%}')dnl
-dnl $1 $2 $3 $4 $5
-VSHIFTCNT(dup, dup, 8, B, D)
-VSHIFTCNT(dup, dup, 16, B, X)
+dnl
+define(`VSRCNT', `
+instruct vsrcnt$1$2`'(vec$3 dst, iRegIorL2I cnt) %{
+ predicate(UseSVE == 0 && ifelse($1, 8,
+ (n->as_Vector()->length_in_bytes() == 4 ||`
+ 'n->as_Vector()->length_in_bytes() == $1),
+ n->as_Vector()->length_in_bytes() == $1));
+ match(Set dst (RShiftCntV cnt));
+ ins_cost(INSN_COST * 2);
+ format %{ "negw rscratch1, $cnt\t"
+ "dup $dst, rscratch1\t# shift count vector ($1$2)" %}
+ ins_encode %{
+ __ negw(rscratch1, as_Register($cnt$$reg));
+ __ dup(as_FloatRegister($dst$$reg), __ T$1$2, rscratch1);
+ %}
+ ins_pipe(vdup_reg_reg`'ifelse($3, D, 64, 128));
+%}')dnl
+dnl
+
+// Vector shift count
+// Note 1: Only the low 8 bits of each element are used, so it doesn't
+// matter whether we treat the elements as ints or bytes here.
+// Note 2: The shift value is additionally negated for RShiftCntV. See the
+// comments on the vsra8B rule for more details.
+dnl $1 $2 $3
+VSLCNT(8, B, D)
+VSLCNT(16, B, X)
+VSRCNT(8, B, D)
+VSRCNT(16, B, X)
+dnl
+define(`PREDICATE',
+`ifelse($1, 8B,
+ ifelse($3, `', `predicate(n->as_Vector()->length() == 4 || n->as_Vector()->length() == 8);',
+ `predicate((n->as_Vector()->length() == 4 || n->as_Vector()->length() == 8) &&`
+ '$3);'),
+ $1, 4S,
+ ifelse($3, `', `predicate(n->as_Vector()->length() == 2 || n->as_Vector()->length() == 4);',
+ `predicate((n->as_Vector()->length() == 2 || n->as_Vector()->length() == 4) &&`
+ '$3);'),
+ ifelse($3, `', `predicate(n->as_Vector()->length() == $2);',
+ `predicate(n->as_Vector()->length() == $2 && $3);'))')dnl
dnl
define(`VSLL', `
-instruct vsll$3$4`'(vec$6 dst, vec$6 src, vec$6 shift) %{
- predicate(ifelse($3$4, 8B, n->as_Vector()->length() == 4 ||`
- ',
- $3$4, 4S, n->as_Vector()->length() == 2 ||`
- ')n->as_Vector()->length() == $3);
- match(Set dst (LShiftV$4 src shift));
+instruct vsll$1$2`'(vec$4 dst, vec$4 src, vec$4 shift) %{
+ PREDICATE(`$1$2', $1, )
+ match(Set dst (LShiftV$2 src shift));
ins_cost(INSN_COST);
- format %{ "$1 $dst,$src,$shift\t# vector ($3$5)" %}
+ format %{ "sshl $dst,$src,$shift\t# vector ($1$3)" %}
ins_encode %{
- __ $2(as_FloatRegister($dst$$reg), __ T$3$5,
+ __ sshl(as_FloatRegister($dst$$reg), __ T$1$3,
as_FloatRegister($src$$reg),
as_FloatRegister($shift$$reg));
%}
- ins_pipe(vshift`'ifelse($6, D, 64, 128));
+ ins_pipe(vshift`'ifelse($4, D, 64, 128));
%}')dnl
dnl
define(`VSRA', `
-instruct vsra$3$4`'(vec$6 dst, vec$6 src, vec$6 shift, vec$6 tmp) %{
- predicate(ifelse($3$4, 8B, n->as_Vector()->length() == 4 ||`
- ',
- $3$4, 4S, n->as_Vector()->length() == 2 ||`
- ')n->as_Vector()->length() == $3);
- match(Set dst (RShiftV$4 src shift));
+instruct vsra$1$2`'(vec$4 dst, vec$4 src, vec$4 shift) %{
+ PREDICATE(`$1$2', $1, !n->as_ShiftV()->is_var_shift())
+ match(Set dst (RShiftV$2 src shift));
ins_cost(INSN_COST);
- effect(TEMP tmp);
- format %{ "$1 $tmp,$shift\t"
- "$2 $dst,$src,$tmp\t# vector ($3$5)" %}
+ format %{ "sshl $dst,$src,$shift\t# vector ($1$3)" %}
ins_encode %{
- __ $1(as_FloatRegister($tmp$$reg), __ T`'ifelse($6, D, 8B, 16B),
+ __ sshl(as_FloatRegister($dst$$reg), __ T$1$3,
+ as_FloatRegister($src$$reg),
as_FloatRegister($shift$$reg));
- __ $2(as_FloatRegister($dst$$reg), __ T$3$5,
+ %}
+ ins_pipe(vshift`'ifelse($4, D, 64, 128));
+%}')dnl
+dnl
+define(`VSRA_VAR', `
+instruct vsra$1$2_var`'(vec$4 dst, vec$4 src, vec$4 shift) %{
+ PREDICATE(`$1$2', $1, n->as_ShiftV()->is_var_shift())
+ match(Set dst (RShiftV$2 src shift));
+ ins_cost(INSN_COST * 2);
+ effect(TEMP_DEF dst);
+ format %{ "negr $dst,$shift\t"
+ "sshl $dst,$src,$dst\t# vector ($1$3)" %}
+ ins_encode %{
+ __ negr(as_FloatRegister($dst$$reg), __ T`'ifelse($4, D, 8B, 16B),
+ as_FloatRegister($shift$$reg));
+ __ sshl(as_FloatRegister($dst$$reg), __ T$1$3,
as_FloatRegister($src$$reg),
- as_FloatRegister($tmp$$reg));
+ as_FloatRegister($dst$$reg));
%}
- ins_pipe(vshift`'ifelse($6, D, 64, 128));
+ ins_pipe(vshift`'ifelse($4, D, 64, 128));
%}')dnl
dnl
define(`VSRL', `
-instruct vsrl$3$4`'(vec$6 dst, vec$6 src, vec$6 shift, vec$6 tmp) %{
- predicate(ifelse($3$4, 8B, n->as_Vector()->length() == 4 ||`
- ',
- $3$4, 4S, n->as_Vector()->length() == 2 ||`
- ')n->as_Vector()->length() == $3);
- match(Set dst (URShiftV$4 src shift));
+instruct vsrl$1$2`'(vec$4 dst, vec$4 src, vec$4 shift) %{
+ PREDICATE(`$1$2', $1, !n->as_ShiftV()->is_var_shift())
+ match(Set dst (URShiftV$2 src shift));
ins_cost(INSN_COST);
- effect(TEMP tmp);
- format %{ "$1 $tmp,$shift\t"
- "$2 $dst,$src,$tmp\t# vector ($3$5)" %}
+ format %{ "ushl $dst,$src,$shift\t# vector ($1$3)" %}
+ ins_encode %{
+ __ ushl(as_FloatRegister($dst$$reg), __ T$1$3,
+ as_FloatRegister($src$$reg),
+ as_FloatRegister($shift$$reg));
+ %}
+ ins_pipe(vshift`'ifelse($4, D, 64, 128));
+%}')dnl
+dnl
+define(`VSRL_VAR', `
+instruct vsrl$1$2_var`'(vec$4 dst, vec$4 src, vec$4 shift) %{
+ PREDICATE(`$1$2', $1, n->as_ShiftV()->is_var_shift())
+ match(Set dst (URShiftV$2 src shift));
+ ins_cost(INSN_COST * 2);
+ effect(TEMP_DEF dst);
+ format %{ "negr $dst,$shift\t"
+ "ushl $dst,$src,$dst\t# vector ($1$3)" %}
ins_encode %{
- __ $1(as_FloatRegister($tmp$$reg), __ T`'ifelse($6, D, 8B, 16B),
+ __ negr(as_FloatRegister($dst$$reg), __ T`'ifelse($4, D, 8B, 16B),
as_FloatRegister($shift$$reg));
- __ $2(as_FloatRegister($dst$$reg), __ T$3$5,
+ __ ushl(as_FloatRegister($dst$$reg), __ T$1$3,
as_FloatRegister($src$$reg),
- as_FloatRegister($tmp$$reg));
+ as_FloatRegister($dst$$reg));
%}
- ins_pipe(vshift`'ifelse($6, D, 64, 128));
+ ins_pipe(vshift`'ifelse($4, D, 64, 128));
%}')dnl
dnl
define(`VSLL_IMM', `
-instruct vsll$3$4_imm`'(vec$6 dst, vec$6 src, immI shift) %{
- predicate(ifelse($3$4, 8B, n->as_Vector()->length() == 4 ||`
- ',
- $3$4, 4S, n->as_Vector()->length() == 2 ||`
- ')n->as_Vector()->length() == $3);
- match(Set dst (LShiftV$4 src (LShiftCntV shift)));
- ins_cost(INSN_COST);
- format %{ "$1 $dst, $src, $shift\t# vector ($3$5)" %}
- ins_encode %{ifelse($4, B,`
+instruct vsll$1$2_imm`'(vec$4 dst, vec$4 src, immI shift) %{
+ PREDICATE(`$1$2', $1, assert_not_var_shift(n))
+ match(Set dst (LShiftV$2 src (LShiftCntV shift)));
+ ins_cost(INSN_COST);
+ format %{ "shl $dst, $src, $shift\t# vector ($1$3)" %}
+ ins_encode %{ifelse($2, B,`
int sh = (int)$shift$$constant;
if (sh >= 8) {
- __ eor(as_FloatRegister($dst$$reg), __ ifelse($6, D, T8B, T16B),
+ __ eor(as_FloatRegister($dst$$reg), __ ifelse($4, D, T8B, T16B),
as_FloatRegister($src$$reg),
as_FloatRegister($src$$reg));
} else {
- __ $2(as_FloatRegister($dst$$reg), __ T$3$5,
+ __ shl(as_FloatRegister($dst$$reg), __ T$1$3,
as_FloatRegister($src$$reg), sh);
- }', $4, S,`
+ }', $2, S,`
int sh = (int)$shift$$constant;
if (sh >= 16) {
- __ eor(as_FloatRegister($dst$$reg), __ ifelse($6, D, T8B, T16B),
+ __ eor(as_FloatRegister($dst$$reg), __ ifelse($4, D, T8B, T16B),
as_FloatRegister($src$$reg),
as_FloatRegister($src$$reg));
} else {
- __ $2(as_FloatRegister($dst$$reg), __ T$3$5,
+ __ shl(as_FloatRegister($dst$$reg), __ T$1$3,
as_FloatRegister($src$$reg), sh);
}', `
- __ $2(as_FloatRegister($dst$$reg), __ T$3$5,
+ __ shl(as_FloatRegister($dst$$reg), __ T$1$3,
as_FloatRegister($src$$reg),
(int)$shift$$constant);')
%}
- ins_pipe(vshift`'ifelse($6, D, 64, 128)_imm);
+ ins_pipe(vshift`'ifelse($4, D, 64, 128)_imm);
%}')dnl
+dnl
define(`VSRA_IMM', `
-instruct vsra$3$4_imm`'(vec$6 dst, vec$6 src, immI shift) %{
- predicate(ifelse($3$4, 8B, n->as_Vector()->length() == 4 ||`
- ',
- $3$4, 4S, n->as_Vector()->length() == 2 ||`
- ')n->as_Vector()->length() == $3);
- match(Set dst (RShiftV$4 src (RShiftCntV shift)));
- ins_cost(INSN_COST);
- format %{ "$1 $dst, $src, $shift\t# vector ($3$5)" %}
- ins_encode %{ifelse($4, B,`
+instruct vsra$1$2_imm`'(vec$4 dst, vec$4 src, immI shift) %{
+ PREDICATE(`$1$2', $1, assert_not_var_shift(n))
+ match(Set dst (RShiftV$2 src (RShiftCntV shift)));
+ ins_cost(INSN_COST);
+ format %{ "sshr $dst, $src, $shift\t# vector ($1$3)" %}
+ ins_encode %{ifelse($2, B,`
int sh = (int)$shift$$constant;
if (sh >= 8) sh = 7;
- __ $2(as_FloatRegister($dst$$reg), __ T$3$5,
- as_FloatRegister($src$$reg), sh);', $4, S,`
+ __ sshr(as_FloatRegister($dst$$reg), __ T$1$3,
+ as_FloatRegister($src$$reg), sh);', $2, S,`
int sh = (int)$shift$$constant;
if (sh >= 16) sh = 15;
- __ $2(as_FloatRegister($dst$$reg), __ T$3$5,
- as_FloatRegister($src$$reg), sh);', `
- __ $2(as_FloatRegister($dst$$reg), __ T$3$5,
+ __ sshr(as_FloatRegister($dst$$reg), __ T$1$3,
+ as_FloatRegister($src$$reg), sh);', `
+ __ sshr(as_FloatRegister($dst$$reg), __ T$1$3,
as_FloatRegister($src$$reg),
(int)$shift$$constant);')
%}
- ins_pipe(vshift`'ifelse($6, D, 64, 128)_imm);
+ ins_pipe(vshift`'ifelse($4, D, 64, 128)_imm);
%}')dnl
dnl
define(`VSRL_IMM', `
-instruct vsrl$3$4_imm`'(vec$6 dst, vec$6 src, immI shift) %{
- predicate(ifelse($3$4, 8B, n->as_Vector()->length() == 4 ||`
- ',
- $3$4, 4S, n->as_Vector()->length() == 2 ||`
- ')n->as_Vector()->length() == $3);
- match(Set dst (URShiftV$4 src (RShiftCntV shift)));
- ins_cost(INSN_COST);
- format %{ "$1 $dst, $src, $shift\t# vector ($3$5)" %}
- ins_encode %{ifelse($4, B,`
+instruct vsrl$1$2_imm`'(vec$4 dst, vec$4 src, immI shift) %{
+ PREDICATE(`$1$2', $1, assert_not_var_shift(n))
+ match(Set dst (URShiftV$2 src (RShiftCntV shift)));
+ ins_cost(INSN_COST);
+ format %{ "ushr $dst, $src, $shift\t# vector ($1$3)" %}
+ ins_encode %{ifelse($2, B,`
int sh = (int)$shift$$constant;
if (sh >= 8) {
- __ eor(as_FloatRegister($dst$$reg), __ ifelse($6, D, T8B, T16B),
+ __ eor(as_FloatRegister($dst$$reg), __ ifelse($4, D, T8B, T16B),
as_FloatRegister($src$$reg),
as_FloatRegister($src$$reg));
} else {
- __ $2(as_FloatRegister($dst$$reg), __ T$3$5,
- as_FloatRegister($src$$reg), sh);
- }', $4, S,`
+ __ ushr(as_FloatRegister($dst$$reg), __ T$1$3,
+ as_FloatRegister($src$$reg), sh);
+ }', $2, S,`
int sh = (int)$shift$$constant;
if (sh >= 16) {
- __ eor(as_FloatRegister($dst$$reg), __ ifelse($6, D, T8B, T16B),
+ __ eor(as_FloatRegister($dst$$reg), __ ifelse($4, D, T8B, T16B),
as_FloatRegister($src$$reg),
as_FloatRegister($src$$reg));
} else {
- __ $2(as_FloatRegister($dst$$reg), __ T$3$5,
- as_FloatRegister($src$$reg), sh);
+ __ ushr(as_FloatRegister($dst$$reg), __ T$1$3,
+ as_FloatRegister($src$$reg), sh);
}', `
- __ $2(as_FloatRegister($dst$$reg), __ T$3$5,
+ __ ushr(as_FloatRegister($dst$$reg), __ T$1$3,
as_FloatRegister($src$$reg),
(int)$shift$$constant);')
%}
- ins_pipe(vshift`'ifelse($6, D, 64, 128)_imm);
+ ins_pipe(vshift`'ifelse($4, D, 64, 128)_imm);
%}')dnl
dnl
define(`VSRLA_IMM', `
-instruct vsrla$3$4_imm`'(vec$6 dst, vec$6 src, immI shift) %{
- predicate(n->as_Vector()->length() == $3);
- match(Set dst (AddV$4 dst (URShiftV$4 src (RShiftCntV shift))));
+instruct vsrla$1$2_imm`'(vec$4 dst, vec$4 src, immI shift) %{
+ predicate(n->as_Vector()->length() == $1);
+ match(Set dst (AddV$2 dst (URShiftV$2 src (RShiftCntV shift))));
ins_cost(INSN_COST);
- format %{ "$1 $dst, $src, $shift\t# vector ($3$5)" %}
- ins_encode %{ifelse($4, B,`
+ format %{ "usra $dst, $src, $shift\t# vector ($1$3)" %}
+ ins_encode %{ifelse($2, B,`
int sh = (int)$shift$$constant;
if (sh < 8) {
- __ $2(as_FloatRegister($dst$$reg), __ T$3$5,
- as_FloatRegister($src$$reg), sh);
- }', $4, S,`
+ __ usra(as_FloatRegister($dst$$reg), __ T$1$3,
+ as_FloatRegister($src$$reg), sh);
+ }', $2, S,`
int sh = (int)$shift$$constant;
if (sh < 16) {
- __ $2(as_FloatRegister($dst$$reg), __ T$3$5,
- as_FloatRegister($src$$reg), sh);
+ __ usra(as_FloatRegister($dst$$reg), __ T$1$3,
+ as_FloatRegister($src$$reg), sh);
}', `
- __ $2(as_FloatRegister($dst$$reg), __ T$3$5,
+ __ usra(as_FloatRegister($dst$$reg), __ T$1$3,
as_FloatRegister($src$$reg),
(int)$shift$$constant);')
%}
- ins_pipe(vshift`'ifelse($6, D, 64, 128)_imm);
+ ins_pipe(vshift`'ifelse($4, D, 64, 128)_imm);
%}')dnl
dnl
define(`VSRAA_IMM', `
-instruct vsraa$3$4_imm`'(vec$6 dst, vec$6 src, immI shift) %{
- predicate(n->as_Vector()->length() == $3);
- match(Set dst (AddV$4 dst (RShiftV$4 src (RShiftCntV shift))));
+instruct vsraa$1$2_imm`'(vec$4 dst, vec$4 src, immI shift) %{
+ predicate(n->as_Vector()->length() == $1);
+ match(Set dst (AddV$2 dst (RShiftV$2 src (RShiftCntV shift))));
ins_cost(INSN_COST);
- format %{ "$1 $dst, $src, $shift\t# vector ($3$5)" %}
- ins_encode %{ifelse($4, B,`
+ format %{ "ssra $dst, $src, $shift\t# vector ($1$3)" %}
+ ins_encode %{ifelse($2, B,`
int sh = (int)$shift$$constant;
if (sh >= 8) sh = 7;
- __ $2(as_FloatRegister($dst$$reg), __ T$3$5,
- as_FloatRegister($src$$reg), sh);', $4, S,`
+ __ ssra(as_FloatRegister($dst$$reg), __ T$1$3,
+ as_FloatRegister($src$$reg), sh);', $2, S,`
int sh = (int)$shift$$constant;
if (sh >= 16) sh = 15;
- __ $2(as_FloatRegister($dst$$reg), __ T$3$5,
- as_FloatRegister($src$$reg), sh);', `
- __ $2(as_FloatRegister($dst$$reg), __ T$3$5,
+ __ ssra(as_FloatRegister($dst$$reg), __ T$1$3,
+ as_FloatRegister($src$$reg), sh);', `
+ __ ssra(as_FloatRegister($dst$$reg), __ T$1$3,
as_FloatRegister($src$$reg),
(int)$shift$$constant);')
%}
- ins_pipe(vshift`'ifelse($6, D, 64, 128)_imm);
+ ins_pipe(vshift`'ifelse($4, D, 64, 128)_imm);
%}')dnl
-dnl $1 $2 $3 $4 $5 $6
-VSLL(sshl, sshl, 8, B, B, D)
-VSLL(sshl, sshl, 16, B, B, X)
+dnl
+undefine(PREDICATE)dnl
+dnl
+dnl $1 $2 $3 $4
+VSLL(8, B, B, D)
+VSLL(16, B, B, X)
// Right shifts with vector shift count on aarch64 SIMD are implemented
// as left shift by negative shift count.
@@ -2199,8 +2253,6 @@ VSLL(sshl, sshl, 16, B, B, X)
// LoadVector RShiftCntV
// | /
// RShiftVI
-// Note: In inner loop, multiple neg instructions are used, which can be
-// moved to outer loop and merge into one neg instruction.
//
// Case 2: The vector shift count is from loading.
// This case isn't supported by middle-end now. But it's supported by
@@ -2210,61 +2262,83 @@ VSLL(sshl, sshl, 16, B, B, X)
// | /
// RShiftVI
//
-dnl $1 $2 $3 $4 $5 $6
-VSRA(negr, sshl, 8, B, B, D)
-VSRA(negr, sshl, 16, B, B, X)
-VSRL(negr, ushl, 8, B, B, D)
-VSRL(negr, ushl, 16, B, B, X)
-VSLL_IMM(shl, shl, 8, B, B, D)
-VSLL_IMM(shl, shl, 16, B, B, X)
-VSRA_IMM(sshr, sshr, 8, B, B, D)
-VSRA_IMM(sshr, sshr, 16, B, B, X)
-VSRL_IMM(ushr, ushr, 8, B, B, D)
-VSRL_IMM(ushr, ushr, 16, B, B, X)
-VSLL(sshl, sshl, 4, S, H, D)
-VSLL(sshl, sshl, 8, S, H, X)
-VSRA(negr, sshl, 4, S, H, D)
-VSRA(negr, sshl, 8, S, H, X)
-VSRL(negr, ushl, 4, S, H, D)
-VSRL(negr, ushl, 8, S, H, X)
-VSLL_IMM(shl, shl, 4, S, H, D)
-VSLL_IMM(shl, shl, 8, S, H, X)
-VSRA_IMM(sshr, sshr, 4, S, H, D)
-VSRA_IMM(sshr, sshr, 8, S, H, X)
-VSRL_IMM(ushr, ushr, 4, S, H, D)
-VSRL_IMM(ushr, ushr, 8, S, H, X)
-VSLL(sshl, sshl, 2, I, S, D)
-VSLL(sshl, sshl, 4, I, S, X)
-VSRA(negr, sshl, 2, I, S, D)
-VSRA(negr, sshl, 4, I, S, X)
-VSRL(negr, ushl, 2, I, S, D)
-VSRL(negr, ushl, 4, I, S, X)
-VSLL_IMM(shl, shl, 2, I, S, D)
-VSLL_IMM(shl, shl, 4, I, S, X)
-VSRA_IMM(sshr, sshr, 2, I, S, D)
-VSRA_IMM(sshr, sshr, 4, I, S, X)
-VSRL_IMM(ushr, ushr, 2, I, S, D)
-VSRL_IMM(ushr, ushr, 4, I, S, X)
-VSLL(sshl, sshl, 2, L, D, X)
-VSRA(negr, sshl, 2, L, D, X)
-VSRL(negr, ushl, 2, L, D, X)
-VSLL_IMM(shl, shl, 2, L, D, X)
-VSRA_IMM(sshr, sshr, 2, L, D, X)
-VSRL_IMM(ushr, ushr, 2, L, D, X)
-VSRAA_IMM(ssra, ssra, 8, B, B, D)
-VSRAA_IMM(ssra, ssra, 16, B, B, X)
-VSRAA_IMM(ssra, ssra, 4, S, H, D)
-VSRAA_IMM(ssra, ssra, 8, S, H, X)
-VSRAA_IMM(ssra, ssra, 2, I, S, D)
-VSRAA_IMM(ssra, ssra, 4, I, S, X)
-VSRAA_IMM(ssra, ssra, 2, L, D, X)
-VSRLA_IMM(usra, usra, 8, B, B, D)
-VSRLA_IMM(usra, usra, 16, B, B, X)
-VSRLA_IMM(usra, usra, 4, S, H, D)
-VSRLA_IMM(usra, usra, 8, S, H, X)
-VSRLA_IMM(usra, usra, 2, I, S, D)
-VSRLA_IMM(usra, usra, 4, I, S, X)
-VSRLA_IMM(usra, usra, 2, L, D, X)
+// The negate is done in the RShiftCntV rule for case 1, but in the
+// RShiftV* rules for case 2, because case 1 offers an optimization
+// opportunity: the multiple neg instructions in the inner loop can be
+// hoisted to the outer loop and merged into a single neg instruction.
+//
+// Note that ShiftVNode::is_var_shift() indicates whether the vector shift
+// count is a variable vector (case 2) or not (a vector generated by
+// RShiftCntV, i.e. case 1).
+dnl $1 $2 $3 $4
+VSRA(8, B, B, D)
+VSRA_VAR(8, B, B, D)
+VSRA(16, B, B, X)
+VSRA_VAR(16, B, B, X)
+VSRL(8, B, B, D)
+VSRL_VAR(8, B, B, D)
+VSRL(16, B, B, X)
+VSRL_VAR(16, B, B, X)
+VSLL_IMM(8, B, B, D)
+VSLL_IMM(16, B, B, X)
+VSRA_IMM(8, B, B, D)
+VSRA_IMM(16, B, B, X)
+VSRL_IMM(8, B, B, D)
+VSRL_IMM(16, B, B, X)
+VSLL(4, S, H, D)
+VSLL(8, S, H, X)
+VSRA(4, S, H, D)
+VSRA_VAR(4, S, H, D)
+VSRA(8, S, H, X)
+VSRA_VAR(8, S, H, X)
+VSRL(4, S, H, D)
+VSRL_VAR(4, S, H, D)
+VSRL(8, S, H, X)
+VSRL_VAR(8, S, H, X)
+VSLL_IMM(4, S, H, D)
+VSLL_IMM(8, S, H, X)
+VSRA_IMM(4, S, H, D)
+VSRA_IMM(8, S, H, X)
+VSRL_IMM(4, S, H, D)
+VSRL_IMM(8, S, H, X)
+VSLL(2, I, S, D)
+VSLL(4, I, S, X)
+VSRA(2, I, S, D)
+VSRA_VAR(2, I, S, D)
+VSRA(4, I, S, X)
+VSRA_VAR(4, I, S, X)
+VSRL(2, I, S, D)
+VSRL_VAR(2, I, S, D)
+VSRL(4, I, S, X)
+VSRL_VAR(4, I, S, X)
+VSLL_IMM(2, I, S, D)
+VSLL_IMM(4, I, S, X)
+VSRA_IMM(2, I, S, D)
+VSRA_IMM(4, I, S, X)
+VSRL_IMM(2, I, S, D)
+VSRL_IMM(4, I, S, X)
+VSLL(2, L, D, X)
+VSRA(2, L, D, X)
+VSRA_VAR(2, L, D, X)
+VSRL(2, L, D, X)
+VSRL_VAR(2, L, D, X)
+VSLL_IMM(2, L, D, X)
+VSRA_IMM(2, L, D, X)
+VSRL_IMM(2, L, D, X)
+VSRAA_IMM(8, B, B, D)
+VSRAA_IMM(16, B, B, X)
+VSRAA_IMM(4, S, H, D)
+VSRAA_IMM(8, S, H, X)
+VSRAA_IMM(2, I, S, D)
+VSRAA_IMM(4, I, S, X)
+VSRAA_IMM(2, L, D, X)
+VSRLA_IMM(8, B, B, D)
+VSRLA_IMM(16, B, B, X)
+VSRLA_IMM(4, S, H, D)
+VSRLA_IMM(8, S, H, X)
+VSRLA_IMM(2, I, S, D)
+VSRLA_IMM(4, I, S, X)
+VSRLA_IMM(2, L, D, X)
dnl
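The case-1/case-2 split described in the comments above can be sketched in scalar C (a simplified stand-in for the vector rules; the function names are hypothetical): with a broadcast count the negate is loop-invariant and happens once in the count rule, while a variable vector of counts forces a per-element negate inside the shift rule.

```c
#include <stdint.h>

/* Case 1: the count comes from RShiftCntV (one scalar broadcast to all
 * lanes). The negate is hoisted out of the loop, as moving negw into the
 * vsrcnt rule allows for the whole vectorized loop. */
void srl_broadcast(uint32_t *dst, const uint32_t *src, int n, int cnt) {
    int neg_cnt = -cnt;              /* negated once, outside the loop */
    for (int i = 0; i < n; i++)
        dst[i] = src[i] >> -neg_cnt; /* "ushl by a negative count" */
}

/* Case 2: the counts are a variable vector, so each lane's count must be
 * negated inside the loop, as the negr in the *_var rules does. */
void srl_variable(uint32_t *dst, const uint32_t *src,
                  const int *cnt, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = src[i] >> cnt[i];   /* negr + ushl per iteration */
}
```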
define(`VMINMAX', `
instruct v$1$3`'ifelse($5, S, F, D)`'(vec$6 dst, vec$6 src1, vec$6 src2)
diff --git a/src/hotspot/cpu/aarch64/assembler_aarch64.hpp b/src/hotspot/cpu/aarch64/assembler_aarch64.hpp
index 9482c3a65c2ec25c6aca521798e8c9f2204a393a..10fcdaa243c006afdb3e5ebc9e21a51f6970c2b4 100644
--- a/src/hotspot/cpu/aarch64/assembler_aarch64.hpp
+++ b/src/hotspot/cpu/aarch64/assembler_aarch64.hpp
@@ -987,33 +987,35 @@ public:
rf(rt, 0);
}
- void hint(int imm) {
- system(0b00, 0b011, 0b0010, 0b0000, imm);
- }
-
- void nop() {
- hint(0);
- }
-
- void yield() {
- hint(1);
- }
+ // Hint instructions
- void wfe() {
- hint(2);
+#define INSN(NAME, crm, op2) \
+ void NAME() { \
+ system(0b00, 0b011, 0b0010, crm, op2); \
}
- void wfi() {
- hint(3);
- }
+ INSN(nop, 0b000, 0b0000);
+ INSN(yield, 0b000, 0b0001);
+ INSN(wfe, 0b000, 0b0010);
+ INSN(wfi, 0b000, 0b0011);
+ INSN(sev, 0b000, 0b0100);
+ INSN(sevl, 0b000, 0b0101);
- void sev() {
- hint(4);
- }
+ INSN(autia1716, 0b0001, 0b100);
+ INSN(autiasp, 0b0011, 0b101);
+ INSN(autiaz, 0b0011, 0b100);
+ INSN(autib1716, 0b0001, 0b110);
+ INSN(autibsp, 0b0011, 0b111);
+ INSN(autibz, 0b0011, 0b110);
+ INSN(pacia1716, 0b0001, 0b000);
+ INSN(paciasp, 0b0011, 0b001);
+ INSN(paciaz, 0b0011, 0b000);
+ INSN(pacib1716, 0b0001, 0b010);
+ INSN(pacibsp, 0b0011, 0b011);
+ INSN(pacibz, 0b0011, 0b010);
+ INSN(xpaclri, 0b0000, 0b111);
- void sevl() {
- hint(5);
- }
+#undef INSN
// we only provide mrs and msr for the special purpose system
// registers where op1 (instr[20:19]) == 11 and, (currently) only
@@ -1099,18 +1101,21 @@ public:
}
// Unconditional branch (register)
- void branch_reg(Register R, int opc) {
+
+ void branch_reg(int OP, int A, int M, Register RN, Register RM) {
starti;
f(0b1101011, 31, 25);
- f(opc, 24, 21);
- f(0b11111000000, 20, 10);
- rf(R, 5);
- f(0b00000, 4, 0);
+ f(OP, 24, 21);
+ f(0b111110000, 20, 12);
+ f(A, 11, 11);
+ f(M, 10, 10);
+ rf(RN, 5);
+ rf(RM, 0);
}
-#define INSN(NAME, opc) \
- void NAME(Register R) { \
- branch_reg(R, opc); \
+#define INSN(NAME, opc) \
+ void NAME(Register RN) { \
+ branch_reg(opc, 0, 0, RN, r0); \
}
INSN(br, 0b0000);
@@ -1121,14 +1126,48 @@ public:
#undef INSN
-#define INSN(NAME, opc) \
- void NAME() { \
- branch_reg(dummy_reg, opc); \
+#define INSN(NAME, opc) \
+ void NAME() { \
+ branch_reg(opc, 0, 0, dummy_reg, r0); \
}
INSN(eret, 0b0100);
INSN(drps, 0b0101);
+#undef INSN
+
+#define INSN(NAME, M) \
+ void NAME() { \
+ branch_reg(0b0010, 1, M, dummy_reg, dummy_reg); \
+ }
+
+ INSN(retaa, 0);
+ INSN(retab, 1);
+
+#undef INSN
+
+#define INSN(NAME, OP, M) \
+ void NAME(Register rn) { \
+ branch_reg(OP, 1, M, rn, dummy_reg); \
+ }
+
+ INSN(braaz, 0b0000, 0);
+ INSN(brabz, 0b0000, 1);
+ INSN(blraaz, 0b0001, 0);
+ INSN(blrabz, 0b0001, 1);
+
+#undef INSN
+
+#define INSN(NAME, OP, M) \
+ void NAME(Register rn, Register rm) { \
+ branch_reg(OP, 1, M, rn, rm); \
+ }
+
+ INSN(braa, 0b1000, 0);
+ INSN(brab, 0b1000, 1);
+ INSN(blraa, 0b1001, 0);
+ INSN(blrab, 0b1001, 1);
+
#undef INSN
// Load/store exclusive
@@ -1792,6 +1831,37 @@ void mvnw(Register Rd, Register Rm,
INSN(clz, 0b110, 0b00000, 0b00100);
INSN(cls, 0b110, 0b00000, 0b00101);
+ // PAC instructions
+ INSN(pacia, 0b110, 0b00001, 0b00000);
+ INSN(pacib, 0b110, 0b00001, 0b00001);
+ INSN(pacda, 0b110, 0b00001, 0b00010);
+ INSN(pacdb, 0b110, 0b00001, 0b00011);
+ INSN(autia, 0b110, 0b00001, 0b00100);
+ INSN(autib, 0b110, 0b00001, 0b00101);
+ INSN(autda, 0b110, 0b00001, 0b00110);
+ INSN(autdb, 0b110, 0b00001, 0b00111);
+
+#undef INSN
+
+#define INSN(NAME, op29, opcode2, opcode) \
+ void NAME(Register Rd) { \
+ starti; \
+ f(opcode2, 20, 16); \
+ data_processing(current_insn, op29, opcode, Rd, dummy_reg); \
+ }
+
+ // PAC instructions (with zero modifier)
+ INSN(paciza, 0b110, 0b00001, 0b01000);
+ INSN(pacizb, 0b110, 0b00001, 0b01001);
+ INSN(pacdza, 0b110, 0b00001, 0b01010);
+ INSN(pacdzb, 0b110, 0b00001, 0b01011);
+ INSN(autiza, 0b110, 0b00001, 0b01100);
+ INSN(autizb, 0b110, 0b00001, 0b01101);
+ INSN(autdza, 0b110, 0b00001, 0b01110);
+ INSN(autdzb, 0b110, 0b00001, 0b01111);
+ INSN(xpaci, 0b110, 0b00001, 0b10000);
+ INSN(xpacd, 0b110, 0b00001, 0b10001);
+
#undef INSN
// (2 sources)
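The hint and branch emitters above are generated with the header's local `#define INSN ... #undef INSN` pattern: one macro stamps out a family of near-identical member functions, then the name is retired so the next family can redefine it. A minimal sketch of the technique (`encode_hint` and the `*_hint` names are hypothetical, and the encoding is a toy, not the real A64 layout):

```c
/* Stamp out a family of near-identical emitters from one macro, then
 * retire the macro so the name can be reused for the next family. */
static int encode_hint(int crm, int op2) {
    return (crm << 3) | op2;   /* toy encoding, not the real A64 layout */
}

#define INSN(NAME, crm, op2) \
    static int NAME(void) { return encode_hint(crm, op2); }

INSN(nop_hint,   0, 0)
INSN(yield_hint, 0, 1)
INSN(wfe_hint,   0, 2)

#undef INSN   /* the header reuses INSN for each instruction family */
```

Compared with the hand-written `hint(0)`, `hint(1)`, ... bodies this patch replaces, the table form makes it easy to review each mnemonic's `crm`/`op2` fields against the architecture manual.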
diff --git a/src/hotspot/cpu/aarch64/c1_Runtime1_aarch64.cpp b/src/hotspot/cpu/aarch64/c1_Runtime1_aarch64.cpp
index 005f739f0aa0566a83bda8adff7d2cc0106ff3b7..342aa87a6208d9b3059be01e910635f8ada97509 100644
--- a/src/hotspot/cpu/aarch64/c1_Runtime1_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/c1_Runtime1_aarch64.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1999, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1999, 2022, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2014, 2021, Red Hat Inc. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -385,6 +385,7 @@ OopMapSet* Runtime1::generate_handle_exception(StubID id, StubAssembler *sasm) {
// load issuing PC (the return address for this stub) into r3
__ ldr(exception_pc, Address(rfp, 1*BytesPerWord));
+ __ authenticate_return_address(exception_pc, rscratch1);
// make sure that the vm_results are cleared (may be unnecessary)
__ str(zr, Address(rthread, JavaThread::vm_result_offset()));
@@ -433,6 +434,7 @@ OopMapSet* Runtime1::generate_handle_exception(StubID id, StubAssembler *sasm) {
__ str(exception_pc, Address(rthread, JavaThread::exception_pc_offset()));
// patch throwing pc into return address (has bci & oop map)
+ __ protect_return_address(exception_pc, rscratch1);
__ str(exception_pc, Address(rfp, 1*BytesPerWord));
// compute the exception handler.
@@ -448,6 +450,7 @@ OopMapSet* Runtime1::generate_handle_exception(StubID id, StubAssembler *sasm) {
__ invalidate_registers(false, true, true, true, true, true);
// patch the return address, this stub will directly return to the exception handler
+ __ protect_return_address(r0, rscratch1);
__ str(r0, Address(rfp, 1*BytesPerWord));
switch (id) {
@@ -496,10 +499,12 @@ void Runtime1::generate_unwind_exception(StubAssembler *sasm) {
// Save our return address because
// exception_handler_for_return_address will destroy it. We also
// save exception_oop
+ __ mov(r3, lr);
+ __ protect_return_address();
__ stp(lr, exception_oop, Address(__ pre(sp, -2 * wordSize)));
// search the exception handler address of the caller (using the return address)
- __ call_VM_leaf(CAST_FROM_FN_PTR(address, SharedRuntime::exception_handler_for_return_address), rthread, lr);
+ __ call_VM_leaf(CAST_FROM_FN_PTR(address, SharedRuntime::exception_handler_for_return_address), rthread, r3);
// r0: exception handler address of the caller
// Only R0 is valid at this time; all other registers have been
@@ -512,6 +517,7 @@ void Runtime1::generate_unwind_exception(StubAssembler *sasm) {
// get throwing pc (= return address).
// lr has been destroyed by the call
__ ldp(lr, exception_oop, Address(__ post(sp, 2 * wordSize)));
+ __ authenticate_return_address();
__ mov(r3, lr);
__ verify_not_null_oop(exception_oop);
diff --git a/src/hotspot/cpu/aarch64/frame_aarch64.cpp b/src/hotspot/cpu/aarch64/frame_aarch64.cpp
index cb59e8b12afc79f32b5e3793d216ed49b4beb000..3363e53690e474bf5dbde3ef92c4f87601b96e08 100644
--- a/src/hotspot/cpu/aarch64/frame_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/frame_aarch64.cpp
@@ -128,13 +128,13 @@ bool frame::safe_for_sender(JavaThread *thread) {
return false;
}
- sender_pc = (address) this->fp()[return_addr_offset];
// for interpreted frames, the value below is the sender "raw" sp,
// which can be different from the sender unextended sp (the sp seen
// by the sender) because of current frame local variables
sender_sp = (intptr_t*) addr_at(sender_sp_offset);
sender_unextended_sp = (intptr_t*) this->fp()[interpreter_frame_sender_sp_offset];
saved_fp = (intptr_t*) this->fp()[link_offset];
+ sender_pc = pauth_strip_verifiable((address) this->fp()[return_addr_offset], (address)saved_fp);
} else {
// must be some sort of compiled/runtime frame
@@ -151,9 +151,9 @@ bool frame::safe_for_sender(JavaThread *thread) {
return false;
}
sender_unextended_sp = sender_sp;
- sender_pc = (address) *(sender_sp-1);
// Note: frame::sender_sp_offset is only valid for compiled frame
saved_fp = (intptr_t*) *(sender_sp - frame::sender_sp_offset);
+ sender_pc = pauth_strip_verifiable((address) *(sender_sp-1), (address)saved_fp);
}
@@ -268,17 +268,22 @@ bool frame::safe_for_sender(JavaThread *thread) {
void frame::patch_pc(Thread* thread, address pc) {
assert(_cb == CodeCache::find_blob(pc), "unexpected pc");
address* pc_addr = &(((address*) sp())[-1]);
+ address signing_sp = (((address*) sp())[-2]);
+ address signed_pc = pauth_sign_return_address(pc, (address)signing_sp);
+ address pc_old = pauth_strip_verifiable(*pc_addr, (address)signing_sp);
if (TracePcPatching) {
- tty->print_cr("patch_pc at address " INTPTR_FORMAT " [" INTPTR_FORMAT " -> " INTPTR_FORMAT "]",
- p2i(pc_addr), p2i(*pc_addr), p2i(pc));
+ tty->print("patch_pc at address " INTPTR_FORMAT " [" INTPTR_FORMAT " -> " INTPTR_FORMAT "]",
+ p2i(pc_addr), p2i(pc_old), p2i(pc));
+ if (VM_Version::use_rop_protection()) {
+ tty->print(" [signed " INTPTR_FORMAT " -> " INTPTR_FORMAT "]", p2i(*pc_addr), p2i(signed_pc));
+ }
+ tty->print_cr("");
}
- // Only generated code frames should be patched, therefore the return address will not be signed.
- assert(pauth_ptr_is_raw(*pc_addr), "cannot be signed");
// Either the return address is the original one or we are going to
// patch in the same address that's already there.
- assert(_pc == *pc_addr || pc == *pc_addr, "must be");
- *pc_addr = pc;
+ assert(_pc == pc_old || pc == pc_old, "must be");
+ *pc_addr = signed_pc;
address original_pc = CompiledMethod::get_deopt_original_pc(this);
if (original_pc != NULL) {
assert(original_pc == _pc, "expected original PC to be stored before patching");
@@ -455,12 +460,12 @@ frame frame::sender_for_interpreter_frame(RegisterMap* map) const {
}
#endif // COMPILER2_OR_JVMCI
- // Use the raw version of pc - the interpreter should not have signed it.
+ // For ROP protection, the Interpreter will have signed the sender_pc, but there is no requirement to authenticate it here.
+ address sender_pc = pauth_strip_verifiable(sender_pc_maybe_signed(), (address)link());
- return frame(sender_sp, unextended_sp, link(), sender_pc_maybe_signed());
+ return frame(sender_sp, unextended_sp, link(), sender_pc);
}
-
//------------------------------------------------------------------------------
// frame::sender_for_compiled_frame
frame frame::sender_for_compiled_frame(RegisterMap* map) const {
@@ -482,7 +487,9 @@ frame frame::sender_for_compiled_frame(RegisterMap* map) const {
intptr_t* unextended_sp = l_sender_sp;
// the return_address is always the word on the stack
- address sender_pc = (address) *(l_sender_sp-1);
+
+ // For ROP protection, C1/C2 will have signed the sender_pc, but there is no requirement to authenticate it here.
+ address sender_pc = pauth_strip_verifiable((address) *(l_sender_sp-1), (address) *(l_sender_sp-2));
intptr_t** saved_fp_addr = (intptr_t**) (l_sender_sp - frame::sender_sp_offset);
@@ -530,6 +537,9 @@ frame frame::sender_raw(RegisterMap* map) const {
// Must be native-compiled frame, i.e. the marshaling code for native
// methods that exists in the core system.
+ // Native code may or may not have signed the return address; we have no way to be sure,
+ // nor do we know which signing method was used. Instead, just ensure the stripped value is used.
+
return frame(sender_sp(), link(), sender_pc());
}
diff --git a/src/hotspot/cpu/aarch64/frame_aarch64.inline.hpp b/src/hotspot/cpu/aarch64/frame_aarch64.inline.hpp
index b0fe436ca59609a5475c4503b755675a0c55bd58..20b5b8e8662484d0f3319416f3ce79f373b6f959 100644
--- a/src/hotspot/cpu/aarch64/frame_aarch64.inline.hpp
+++ b/src/hotspot/cpu/aarch64/frame_aarch64.inline.hpp
@@ -148,10 +148,12 @@ inline intptr_t* frame::id(void) const { return unextended_sp(); }
inline bool frame::is_older(intptr_t* id) const { assert(this->id() != NULL && id != NULL, "NULL frame id");
return this->id() > id ; }
-
-
inline intptr_t* frame::link() const { return (intptr_t*) *(intptr_t **)addr_at(link_offset); }
+inline intptr_t* frame::link_or_null() const {
+ intptr_t** ptr = (intptr_t **)addr_at(link_offset);
+ return os::is_readable_pointer(ptr) ? *ptr : NULL;
+}
inline intptr_t* frame::unextended_sp() const { return _unextended_sp; }
diff --git a/src/hotspot/cpu/aarch64/gc/g1/g1BarrierSetAssembler_aarch64.cpp b/src/hotspot/cpu/aarch64/gc/g1/g1BarrierSetAssembler_aarch64.cpp
index cd689b008e05ee143542d98e48c225600785892b..01aff54c96d5582e65dec49346fc993b507d2abd 100644
--- a/src/hotspot/cpu/aarch64/gc/g1/g1BarrierSetAssembler_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/gc/g1/g1BarrierSetAssembler_aarch64.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2018, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2018, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -271,7 +271,7 @@ void G1BarrierSetAssembler::load_at(MacroAssembler* masm, DecoratorSet decorator
ModRefBarrierSetAssembler::load_at(masm, decorators, type, dst, src, tmp1, tmp_thread);
if (on_oop && on_reference) {
// LR is live. It must be saved around calls.
- __ enter(); // barrier may call runtime
+ __ enter(/*strip_ret_addr*/true); // barrier may call runtime
// Generate the G1 pre-barrier code to log the value of
// the referent field in an SATB buffer.
g1_write_barrier_pre(masm /* masm */,
diff --git a/src/hotspot/cpu/aarch64/gc/shenandoah/shenandoahBarrierSetAssembler_aarch64.cpp b/src/hotspot/cpu/aarch64/gc/shenandoah/shenandoahBarrierSetAssembler_aarch64.cpp
index 53de1d921fca33dc4213bbdfb1574c48d9a901eb..bcabb40e63cbea92dfe2ec363c7206908ce7c2a6 100644
--- a/src/hotspot/cpu/aarch64/gc/shenandoah/shenandoahBarrierSetAssembler_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/gc/shenandoah/shenandoahBarrierSetAssembler_aarch64.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2018, 2021, Red Hat, Inc. All rights reserved.
+ * Copyright (c) 2018, 2022, Red Hat, Inc. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -237,7 +237,7 @@ void ShenandoahBarrierSetAssembler::load_reference_barrier(MacroAssembler* masm,
bool is_narrow = UseCompressedOops && !is_native;
Label heap_stable, not_cset;
- __ enter();
+ __ enter(/*strip_ret_addr*/true);
Address gc_state(rthread, in_bytes(ShenandoahThreadLocalData::gc_state_offset()));
__ ldrb(rscratch2, gc_state);
@@ -359,7 +359,7 @@ void ShenandoahBarrierSetAssembler::load_at(MacroAssembler* masm, DecoratorSet d
// 3: apply keep-alive barrier if needed
if (ShenandoahBarrierSet::need_keep_alive_barrier(decorators, type)) {
- __ enter();
+ __ enter(/*strip_ret_addr*/true);
__ push_call_clobbered_registers();
satb_write_barrier_pre(masm /* masm */,
noreg /* obj */,
diff --git a/src/hotspot/cpu/aarch64/gc/z/zBarrierSetAssembler_aarch64.cpp b/src/hotspot/cpu/aarch64/gc/z/zBarrierSetAssembler_aarch64.cpp
index 10b1cf20ef910240b4988db7d611a1b778d71ca4..6820be15950ec50f975ee6c870b52c2024317882 100644
--- a/src/hotspot/cpu/aarch64/gc/z/zBarrierSetAssembler_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/gc/z/zBarrierSetAssembler_aarch64.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2019, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2019, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -78,7 +78,7 @@ void ZBarrierSetAssembler::load_at(MacroAssembler* masm,
__ tst(dst, rscratch1);
__ br(Assembler::EQ, done);
- __ enter();
+ __ enter(/*strip_ret_addr*/true);
__ push_call_clobbered_registers_except(RegSet::of(dst));
diff --git a/src/hotspot/cpu/aarch64/globals_aarch64.hpp b/src/hotspot/cpu/aarch64/globals_aarch64.hpp
index 82760cc3bcf066becfedc9c5fb279c2a14cd1c89..443eb46b720ab97ad5381f40c8fffa701ddd6f53 100644
--- a/src/hotspot/cpu/aarch64/globals_aarch64.hpp
+++ b/src/hotspot/cpu/aarch64/globals_aarch64.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2000, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2000, 2022, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2015, 2019, Red Hat Inc. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -118,7 +118,9 @@ define_pd_global(intx, InlineSmallCode, 1000);
product(uint, OnSpinWaitInstCount, 1, DIAGNOSTIC, \
"The number of OnSpinWaitInst instructions to generate." \
"It cannot be used with OnSpinWaitInst=none.") \
- range(1, 99)
+ range(1, 99) \
+ product(ccstr, UseBranchProtection, "none", \
+ "Branch Protection to use: none, standard, pac-ret") \
// end of ARCH_FLAGS
diff --git a/src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp b/src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp
index 69124c299c15145aef0debd8e49596f396dd35d6..ad1a6d58596ff0e08ca6f1f9b188007fad252227 100644
--- a/src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp
@@ -1137,6 +1137,8 @@ void MacroAssembler::verify_oop(Register reg, const char* s) {
}
BLOCK_COMMENT("verify_oop {");
+ strip_return_address(); // This might happen within a stack frame.
+ protect_return_address();
stp(r0, rscratch1, Address(pre(sp, -2 * wordSize)));
stp(rscratch2, lr, Address(pre(sp, -2 * wordSize)));
@@ -1150,6 +1152,7 @@ void MacroAssembler::verify_oop(Register reg, const char* s) {
ldp(rscratch2, lr, Address(post(sp, 2 * wordSize)));
ldp(r0, rscratch1, Address(post(sp, 2 * wordSize)));
+ authenticate_return_address();
BLOCK_COMMENT("} verify_oop");
}
@@ -1166,6 +1169,8 @@ void MacroAssembler::verify_oop_addr(Address addr, const char* s) {
}
BLOCK_COMMENT("verify_oop_addr {");
+ strip_return_address(); // This might happen within a stack frame.
+ protect_return_address();
stp(r0, rscratch1, Address(pre(sp, -2 * wordSize)));
stp(rscratch2, lr, Address(pre(sp, -2 * wordSize)));
@@ -1186,6 +1191,7 @@ void MacroAssembler::verify_oop_addr(Address addr, const char* s) {
ldp(rscratch2, lr, Address(post(sp, 2 * wordSize)));
ldp(r0, rscratch1, Address(post(sp, 2 * wordSize)));
+ authenticate_return_address();
BLOCK_COMMENT("} verify_oop_addr");
}
@@ -2537,7 +2543,7 @@ void MacroAssembler::debug64(char* msg, int64_t pc, int64_t regs[])
fatal("DEBUG MESSAGE: %s", msg);
}
-RegSet MacroAssembler::call_clobbered_registers() {
+RegSet MacroAssembler::call_clobbered_gp_registers() {
RegSet regs = RegSet::range(r0, r17) - RegSet::of(rscratch1, rscratch2);
#ifndef R18_RESERVED
regs += r18_tls;
@@ -2547,7 +2553,7 @@ RegSet MacroAssembler::call_clobbered_registers() {
void MacroAssembler::push_call_clobbered_registers_except(RegSet exclude) {
int step = 4 * wordSize;
- push(call_clobbered_registers() - exclude, sp);
+ push(call_clobbered_gp_registers() - exclude, sp);
sub(sp, sp, step);
mov(rscratch1, -step);
// Push v0-v7, v16-v31.
@@ -2569,7 +2575,7 @@ void MacroAssembler::pop_call_clobbered_registers_except(RegSet exclude) {
reinitialize_ptrue();
- pop(call_clobbered_registers() - exclude, sp);
+ pop(call_clobbered_gp_registers() - exclude, sp);
}
void MacroAssembler::push_CPU_state(bool save_vectors, bool use_sve,
@@ -4296,6 +4302,7 @@ void MacroAssembler::load_byte_map_base(Register reg) {
void MacroAssembler::build_frame(int framesize) {
assert(framesize >= 2 * wordSize, "framesize must include space for FP/LR");
assert(framesize % (2*wordSize) == 0, "must preserve 2*wordSize alignment");
+ protect_return_address();
if (framesize < ((1 << 9) + 2 * wordSize)) {
sub(sp, sp, framesize);
stp(rfp, lr, Address(sp, framesize - 2 * wordSize));
@@ -4328,19 +4335,21 @@ void MacroAssembler::remove_frame(int framesize) {
}
ldp(rfp, lr, Address(post(sp, 2 * wordSize)));
}
+ authenticate_return_address();
}
-// This method checks if provided byte array contains byte with highest bit set.
-address MacroAssembler::has_negatives(Register ary1, Register len, Register result) {
+// This method counts leading positive bytes (highest bit not set) in provided byte array
+address MacroAssembler::count_positives(Register ary1, Register len, Register result) {
// Simple and most common case of aligned small array which is not at the
// end of memory page is placed here. All other cases are in stub.
Label LOOP, END, STUB, STUB_LONG, SET_RESULT, DONE;
const uint64_t UPPER_BIT_MASK=0x8080808080808080;
assert_different_registers(ary1, len, result);
+ mov(result, len);
cmpw(len, 0);
- br(LE, SET_RESULT);
+ br(LE, DONE);
cmpw(len, 4 * wordSize);
br(GE, STUB_LONG); // size > 32 then go to stub
@@ -4359,19 +4368,20 @@ address MacroAssembler::has_negatives(Register ary1, Register len, Register resu
subs(len, len, wordSize);
br(GE, LOOP);
cmpw(len, -wordSize);
- br(EQ, SET_RESULT);
+ br(EQ, DONE);
BIND(END);
- ldr(result, Address(ary1));
- sub(len, zr, len, LSL, 3); // LSL 3 is to get bits from bytes
- lslv(result, result, len);
- tst(result, UPPER_BIT_MASK);
- b(SET_RESULT);
+ ldr(rscratch1, Address(ary1));
+ sub(rscratch2, zr, len, LSL, 3); // LSL 3 is to get bits from bytes
+ lslv(rscratch1, rscratch1, rscratch2);
+ tst(rscratch1, UPPER_BIT_MASK);
+ br(NE, SET_RESULT);
+ b(DONE);
BIND(STUB);
- RuntimeAddress has_neg = RuntimeAddress(StubRoutines::aarch64::has_negatives());
- assert(has_neg.target() != NULL, "has_negatives stub has not been generated");
- address tpc1 = trampoline_call(has_neg);
+ RuntimeAddress count_pos = RuntimeAddress(StubRoutines::aarch64::count_positives());
+ assert(count_pos.target() != NULL, "count_positives stub has not been generated");
+ address tpc1 = trampoline_call(count_pos);
if (tpc1 == NULL) {
DEBUG_ONLY(reset_labels(STUB_LONG, SET_RESULT, DONE));
postcond(pc() == badAddress);
@@ -4380,9 +4390,9 @@ address MacroAssembler::has_negatives(Register ary1, Register len, Register resu
b(DONE);
BIND(STUB_LONG);
- RuntimeAddress has_neg_long = RuntimeAddress(StubRoutines::aarch64::has_negatives_long());
- assert(has_neg_long.target() != NULL, "has_negatives stub has not been generated");
- address tpc2 = trampoline_call(has_neg_long);
+ RuntimeAddress count_pos_long = RuntimeAddress(StubRoutines::aarch64::count_positives_long());
+ assert(count_pos_long.target() != NULL, "count_positives_long stub has not been generated");
+ address tpc2 = trampoline_call(count_pos_long);
if (tpc2 == NULL) {
DEBUG_ONLY(reset_labels(SET_RESULT, DONE));
postcond(pc() == badAddress);
@@ -4391,7 +4401,9 @@ address MacroAssembler::has_negatives(Register ary1, Register len, Register resu
b(DONE);
BIND(SET_RESULT);
- cset(result, NE); // set true or false
+
+ add(len, len, wordSize);
+ sub(result, result, len);
BIND(DONE);
postcond(pc() != badAddress);
@@ -5169,6 +5181,7 @@ void MacroAssembler::get_thread(Register dst) {
LINUX_ONLY(RegSet::range(r0, r1) + lr - dst)
NOT_LINUX (RegSet::range(r0, r17) + lr - dst);
+ protect_return_address();
push(saved_regs, sp);
mov(lr, CAST_FROM_FN_PTR(address, JavaThread::aarch64_get_thread_helper));
@@ -5178,6 +5191,7 @@ void MacroAssembler::get_thread(Register dst) {
}
pop(saved_regs, sp);
+ authenticate_return_address();
}
void MacroAssembler::cache_wb(Address line) {
@@ -5269,3 +5283,102 @@ void MacroAssembler::spin_wait() {
}
}
}
+
+// Stack frame creation/removal
+
+void MacroAssembler::enter(bool strip_ret_addr) {
+ if (strip_ret_addr) {
+ // Addresses can only be signed once. If there are multiple nested frames being created
+ // in the same function, then the return address needs stripping first.
+ strip_return_address();
+ }
+ protect_return_address();
+ stp(rfp, lr, Address(pre(sp, -2 * wordSize)));
+ mov(rfp, sp);
+}
+
+void MacroAssembler::leave() {
+ mov(sp, rfp);
+ ldp(rfp, lr, Address(post(sp, 2 * wordSize)));
+ authenticate_return_address();
+}
+
+// ROP Protection
+// Use the AArch64 PAC feature to add ROP protection for generated code. Use whenever creating/
+// destroying stack frames or whenever directly loading/storing the LR to memory.
+// If ROP protection is not set then these functions are no-ops.
+// For more details on PAC see pauth_aarch64.hpp.
+
+// Sign the LR. Use during construction of a stack frame, before storing the LR to memory.
+// Uses the FP as the modifier.
+//
+void MacroAssembler::protect_return_address() {
+ if (VM_Version::use_rop_protection()) {
+ check_return_address();
+ // The standard convention for C code is to use paciasp, which uses SP as the modifier. This
+ // works because in C code, FP and SP match on function entry. In the JDK, SP and FP may not
+ // match, so instead explicitly use the FP.
+ pacia(lr, rfp);
+ }
+}
+
+// Sign the return value in the given register. Use before updating the LR in the existing stack
+// frame for the current function.
+// Uses the FP from the start of the function as the modifier - which is stored at the address of
+// the current FP.
+//
+void MacroAssembler::protect_return_address(Register return_reg, Register temp_reg) {
+ if (VM_Version::use_rop_protection()) {
+ assert(PreserveFramePointer, "PreserveFramePointer must be set for ROP protection");
+ check_return_address(return_reg);
+ ldr(temp_reg, Address(rfp));
+ pacia(return_reg, temp_reg);
+ }
+}
+
+// Authenticate the LR. Use before function return, after restoring FP and loading LR from memory.
+//
+void MacroAssembler::authenticate_return_address(Register return_reg) {
+ if (VM_Version::use_rop_protection()) {
+ autia(return_reg, rfp);
+ check_return_address(return_reg);
+ }
+}
+
+// Authenticate the return value in the given register. Use before updating the LR in the existing
+// stack frame for the current function.
+// Uses the FP from the start of the function as the modifier - which is stored at the address of
+// the current FP.
+//
+void MacroAssembler::authenticate_return_address(Register return_reg, Register temp_reg) {
+ if (VM_Version::use_rop_protection()) {
+ assert(PreserveFramePointer, "PreserveFramePointer must be set for ROP protection");
+ ldr(temp_reg, Address(rfp));
+ autia(return_reg, temp_reg);
+ check_return_address(return_reg);
+ }
+}
+
+// Strip any PAC data from LR without performing any authentication. Use with caution - only if
+// there is no guaranteed way of authenticating the LR.
+//
+void MacroAssembler::strip_return_address() {
+ if (VM_Version::use_rop_protection()) {
+ xpaclri();
+ }
+}
+
+#ifndef PRODUCT
+// PAC failures can be difficult to debug. After an authentication failure, a segfault will only
+// occur when the pointer is used, i.e. when the program returns to the invalid LR. At this point
+// it is difficult to debug back to the callee function.
+// This function simply loads from the address in the given register.
+// Use directly after authentication to catch authentication failures.
+// Also use before signing to check that the pointer is valid and hasn't already been signed.
+//
+void MacroAssembler::check_return_address(Register return_reg) {
+ if (VM_Version::use_rop_protection()) {
+ ldr(zr, Address(return_reg));
+ }
+}
+#endif
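The has_negatives to count_positives rename above also changes the contract from a boolean to a count. A small reference model of the new semantics (a hypothetical helper for illustration, not the stub itself): return the number of leading bytes whose high bit is clear, so the result equals len exactly when no negative byte is present, preserving the old check as `count_positives(a, n) != n`.

```cpp
#include <cstddef>

// Reference model of the count_positives contract: count leading bytes with
// the high bit clear, stopping at the first byte that has its high bit set.
static int count_positives_model(const signed char* ary, int len) {
  for (int i = 0; i < len; i++) {
    if (ary[i] < 0) {  // high bit set: a "negative" byte
      return i;        // number of leading positive bytes
    }
  }
  return len;          // no negative byte found
}
```

The assembly version above computes the same result differently: it pre-loads `result` with `len` and, on finding a negative byte, subtracts the remaining length at SET_RESULT.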
diff --git a/src/hotspot/cpu/aarch64/macroAssembler_aarch64.hpp b/src/hotspot/cpu/aarch64/macroAssembler_aarch64.hpp
index 16f9790bde42c20cb808f6c2321b7135c1279452..d5bfc42e784d43070918d936f30c042d2b259a19 100644
--- a/src/hotspot/cpu/aarch64/macroAssembler_aarch64.hpp
+++ b/src/hotspot/cpu/aarch64/macroAssembler_aarch64.hpp
@@ -468,7 +468,7 @@ public:
void push_fp(FloatRegSet regs, Register stack) { if (regs.bits()) push_fp(regs.bits(), stack); }
void pop_fp(FloatRegSet regs, Register stack) { if (regs.bits()) pop_fp(regs.bits(), stack); }
- static RegSet call_clobbered_registers();
+ static RegSet call_clobbered_gp_registers();
void push_p(PRegSet regs, Register stack) { if (regs.bits()) push_p(regs.bits(), stack); }
void pop_p(PRegSet regs, Register stack) { if (regs.bits()) pop_p(regs.bits(), stack); }
@@ -688,16 +688,16 @@ public:
void align(int modulus);
// Stack frame creation/removal
- void enter()
- {
- stp(rfp, lr, Address(pre(sp, -2 * wordSize)));
- mov(rfp, sp);
- }
- void leave()
- {
- mov(sp, rfp);
- ldp(rfp, lr, Address(post(sp, 2 * wordSize)));
- }
+ void enter(bool strip_ret_addr = false);
+ void leave();
+
+ // ROP Protection
+ void protect_return_address();
+ void protect_return_address(Register return_reg, Register temp_reg);
+ void authenticate_return_address(Register return_reg = lr);
+ void authenticate_return_address(Register return_reg, Register temp_reg);
+ void strip_return_address();
+ void check_return_address(Register return_reg=lr) PRODUCT_RETURN;
// Support for getting the JavaThread pointer (i.e.; a reference to thread-local information)
// The pointer will be loaded into the thread register.
@@ -1234,7 +1234,7 @@ public:
Register table0, Register table1, Register table2, Register table3,
bool upper = false);
- address has_negatives(Register ary1, Register len, Register result);
+ address count_positives(Register ary1, Register len, Register result);
address arrays_equals(Register a1, Register a2, Register result, Register cnt1,
Register tmp1, Register tmp2, Register tmp3, int elem_size);
diff --git a/src/hotspot/cpu/aarch64/matcher_aarch64.hpp b/src/hotspot/cpu/aarch64/matcher_aarch64.hpp
index aca82240a5731fb10d8734bbce8411763b8744de..c2f801f408ade3700e79d217b418504a08d5fb7e 100644
--- a/src/hotspot/cpu/aarch64/matcher_aarch64.hpp
+++ b/src/hotspot/cpu/aarch64/matcher_aarch64.hpp
@@ -163,4 +163,10 @@
// Implements a variant of EncodeISOArrayNode that encode ASCII only
static const bool supports_encode_ascii_array = true;
+ // Returns pre-selection estimated size of a vector operation.
+ static int vector_op_pre_select_sz_estimate(int vopc, BasicType ety, int vlen) {
+ return 0;
+ }
+
+
#endif // CPU_AARCH64_MATCHER_AARCH64_HPP
diff --git a/src/hotspot/cpu/aarch64/pauth_aarch64.hpp b/src/hotspot/cpu/aarch64/pauth_aarch64.hpp
index e12a671daf1e2552cab87b3ac3344bb9a5d61b65..fe5fbbce9f05f22aea98bfee586123c113764023 100644
--- a/src/hotspot/cpu/aarch64/pauth_aarch64.hpp
+++ b/src/hotspot/cpu/aarch64/pauth_aarch64.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2021, Arm Limited. All rights reserved.
+ * Copyright (c) 2021, 2022, Arm Limited. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -27,9 +27,58 @@
#include OS_CPU_HEADER_INLINE(pauth)
+// Support for ROP Protection in VM code.
+// This is provided via the AArch64 PAC feature.
+// For more details on PAC see The Arm ARM, section "Pointer authentication in AArch64 state".
+//
+// PAC provides a method to sign and authenticate pointer values. Signing combines the register
+// being signed, an additional modifier and a per-process secret key, writing the result to unused
+// high bits of the signed register. Once signed a register must be authenticated or stripped
+// before it can be used.
+// Authentication reverses the signing operation, clearing the high bits. If the signed register
+// or modifier has changed, then authentication will fail and invalid data will be written to the
+// high bits; the next time the pointer is used, a segfault will be raised.
+//
+// Assume a malicious attacker is able to edit the stack via an exploit. Control flow can be
+// changed by re-writing the return addresses stored on the stack. ROP protection prevents this by
+// signing return addresses before saving them on the stack, then authenticating when they are
+// loaded back. The scope of this protection is per function (a value is signed and authenticated
+// by the same function), therefore it is possible for different functions within the same
+// program to use different signing methods.
+//
+// The VM and native code are protected by compiling with the GCC AArch64 branch protection flag.
+//
+// All generated code is protected via the ROP functions provided in macroAssembler.
+//
+// In addition, the VM needs to be aware of PAC whenever viewing or editing the stack. Functions
+// are provided here and in the OS specific files. We should assume all stack frames for generated
+// code have signed return values. Rewriting the stack should ensure new values are correctly
+// signed. However, we cannot make any assumptions about how (or if) native code uses PAC - here
+// we should limit access to viewing via stripping.
+//
+
+
+// Confirm the given pointer has not been signed - i.e. none of the high bits are set.
+//
+// Note this can give false positives. The PAC signing can generate a signature with all signing
+// bits as zeros, causing this function to return true. Therefore this should only be used for
+// assert style checking. In addition, this function should never be used with a "not" to confirm
+// a pointer is signed, as it will fail the above case. The only safe way to do this is to instead
+// authenticate the pointer.
+//
inline bool pauth_ptr_is_raw(address ptr) {
- // Confirm none of the high bits are set in the pointer.
return ptr == pauth_strip_pointer(ptr);
}
+// Strip a return value (same as pauth_strip_pointer). When debug is enabled, authenticate it
+// instead.
+//
+inline address pauth_strip_verifiable(address ret_addr, address modifier) {
+ if (VM_Version::use_rop_protection()) {
+ DEBUG_ONLY(ret_addr = pauth_authenticate_return_address(ret_addr, modifier);)
+ NOT_DEBUG(ret_addr = pauth_strip_pointer(ret_addr));
+ }
+ return ret_addr;
+}
+
#endif // CPU_AARCH64_PAUTH_AARCH64_HPP
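The sign/strip/authenticate lifecycle documented above can be illustrated with a toy model. This is purely illustrative: real PAC derives the signature with a keyed, architecture-defined cipher (e.g. QARMA) and a per-process key, while the model below assumes a 48-bit address space and fakes a 16-bit signature in bits 63:48.

```cpp
#include <cstdint>

static const uint64_t kAddrMask = (UINT64_C(1) << 48) - 1;

// Fake signature over the pointer and modifier (illustrative only).
static uint64_t toy_sig(uint64_t ptr, uint64_t mod) {
  return ((ptr >> 5) ^ (mod * 31)) & 0xFFFF;
}

// Signing folds the signature into the unused high bits.
static uint64_t toy_sign(uint64_t ptr, uint64_t mod) {
  return (ptr & kAddrMask) | (toy_sig(ptr & kAddrMask, mod) << 48);
}

// Stripping clears the high bits without any check (like xpaci /
// pauth_strip_pointer) - the release-build path of pauth_strip_verifiable.
static uint64_t toy_strip(uint64_t ptr) {
  return ptr & kAddrMask;
}

// Authentication restores the raw pointer only if signature and modifier
// match; on mismatch it leaves invalid high bits so the next use faults.
static uint64_t toy_auth(uint64_t ptr, uint64_t mod) {
  uint64_t raw = ptr & kAddrMask;
  if ((ptr >> 48) != toy_sig(raw, mod)) {
    return raw | (UINT64_C(0x2000) << 48);  // poison, like a failed AUTIA
  }
  return raw;
}
```

This also shows why `pauth_ptr_is_raw` can give false positives: a genuine signature can happen to be all zeros, making a signed pointer indistinguishable from a raw one.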
diff --git a/src/hotspot/cpu/aarch64/register_aarch64.hpp b/src/hotspot/cpu/aarch64/register_aarch64.hpp
index 400eaeb90f20f8c4b121317329f2e4662e19325b..3fe2fae42a16c3ac24c8f320c45a5029f10854f2 100644
--- a/src/hotspot/cpu/aarch64/register_aarch64.hpp
+++ b/src/hotspot/cpu/aarch64/register_aarch64.hpp
@@ -314,115 +314,10 @@ class ConcreteRegisterImpl : public AbstractRegisterImpl {
static const int max_pr;
};
-template <class RegImpl> class RegSetIterator;
-
-// A set of registers
-template <class RegImpl>
-class AbstractRegSet {
- uint32_t _bitset;
-
- AbstractRegSet(uint32_t bitset) : _bitset(bitset) { }
-
-public:
-
- AbstractRegSet() : _bitset(0) { }
-
- AbstractRegSet(RegImpl r1) : _bitset(1 << r1->encoding()) { }
-
- AbstractRegSet operator+(const AbstractRegSet aSet) const {
- AbstractRegSet result(_bitset | aSet._bitset);
- return result;
- }
-
- AbstractRegSet operator-(const AbstractRegSet aSet) const {
- AbstractRegSet result(_bitset & ~aSet._bitset);
- return result;
- }
-
- AbstractRegSet &operator+=(const AbstractRegSet aSet) {
- *this = *this + aSet;
- return *this;
- }
-
- AbstractRegSet &operator-=(const AbstractRegSet aSet) {
- *this = *this - aSet;
- return *this;
- }
-
- static AbstractRegSet of(RegImpl r1) {
- return AbstractRegSet(r1);
- }
-
- static AbstractRegSet of(RegImpl r1, RegImpl r2) {
- return of(r1) + r2;
- }
-
- static AbstractRegSet of(RegImpl r1, RegImpl r2, RegImpl r3) {
- return of(r1, r2) + r3;
- }
-
- static AbstractRegSet of(RegImpl r1, RegImpl r2, RegImpl r3, RegImpl r4) {
- return of(r1, r2, r3) + r4;
- }
-
- static AbstractRegSet range(RegImpl start, RegImpl end) {
- uint32_t bits = ~0;
- bits <<= start->encoding();
- bits <<= 31 - end->encoding();
- bits >>= 31 - end->encoding();
-
- return AbstractRegSet(bits);
- }
-
- uint32_t bits() const { return _bitset; }
-
-private:
-
- RegImpl first();
-
-public:
-
- friend class RegSetIterator<RegImpl>;
-
- RegSetIterator<RegImpl> begin();
-};
-
typedef AbstractRegSet<Register> RegSet;
typedef AbstractRegSet<FloatRegister> FloatRegSet;
typedef AbstractRegSet<PRegister> PRegSet;
-template <class RegImpl>
-class RegSetIterator {
- AbstractRegSet<RegImpl> _regs;
-
-public:
- RegSetIterator(AbstractRegSet<RegImpl> x): _regs(x) {}
- RegSetIterator(const RegSetIterator& mit) : _regs(mit._regs) {}
-
- RegSetIterator& operator++() {
- RegImpl r = _regs.first();
- if (r->is_valid())
- _regs -= r;
- return *this;
- }
-
- bool operator==(const RegSetIterator& rhs) const {
- return _regs.bits() == rhs._regs.bits();
- }
- bool operator!=(const RegSetIterator& rhs) const {
- return ! (rhs == *this);
- }
-
- RegImpl operator*() {
- return _regs.first();
- }
-};
-
-template <class RegImpl>
-inline RegSetIterator<RegImpl> AbstractRegSet<RegImpl>::begin() {
- return RegSetIterator<RegImpl>(*this);
-}
-
template <>
inline Register AbstractRegSet<Register>::first() {
uint32_t first = _bitset & -_bitset;
diff --git a/src/hotspot/cpu/aarch64/register_definitions_aarch64.cpp b/src/hotspot/cpu/aarch64/register_definitions_aarch64.cpp
deleted file mode 100644
index f48c70d09e6707bbd58aa75674cdf668c912ec68..0000000000000000000000000000000000000000
--- a/src/hotspot/cpu/aarch64/register_definitions_aarch64.cpp
+++ /dev/null
@@ -1,208 +0,0 @@
-/*
- * Copyright (c) 2002, 2020, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2014, Red Hat Inc. All rights reserved.
- * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
- *
- * This code is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License version 2 only, as
- * published by the Free Software Foundation.
- *
- * This code is distributed in the hope that it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * version 2 for more details (a copy is included in the LICENSE file that
- * accompanied this code).
- *
- * You should have received a copy of the GNU General Public License version
- * 2 along with this work; if not, write to the Free Software Foundation,
- * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
- *
- * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
- * or visit www.oracle.com if you need additional information or have any
- * questions.
- *
- */
-
-#include "precompiled.hpp"
-#include "asm/assembler.hpp"
-#include "asm/macroAssembler.inline.hpp"
-#include "asm/register.hpp"
-#include "register_aarch64.hpp"
-# include "interp_masm_aarch64.hpp"
-
-REGISTER_DEFINITION(Register, noreg);
-
-REGISTER_DEFINITION(Register, r0);
-REGISTER_DEFINITION(Register, r1);
-REGISTER_DEFINITION(Register, r2);
-REGISTER_DEFINITION(Register, r3);
-REGISTER_DEFINITION(Register, r4);
-REGISTER_DEFINITION(Register, r5);
-REGISTER_DEFINITION(Register, r6);
-REGISTER_DEFINITION(Register, r7);
-REGISTER_DEFINITION(Register, r8);
-REGISTER_DEFINITION(Register, r9);
-REGISTER_DEFINITION(Register, r10);
-REGISTER_DEFINITION(Register, r11);
-REGISTER_DEFINITION(Register, r12);
-REGISTER_DEFINITION(Register, r13);
-REGISTER_DEFINITION(Register, r14);
-REGISTER_DEFINITION(Register, r15);
-REGISTER_DEFINITION(Register, r16);
-REGISTER_DEFINITION(Register, r17);
-REGISTER_DEFINITION(Register, r18_tls); // see comment in register_aarch64.hpp
-REGISTER_DEFINITION(Register, r19);
-REGISTER_DEFINITION(Register, r20);
-REGISTER_DEFINITION(Register, r21);
-REGISTER_DEFINITION(Register, r22);
-REGISTER_DEFINITION(Register, r23);
-REGISTER_DEFINITION(Register, r24);
-REGISTER_DEFINITION(Register, r25);
-REGISTER_DEFINITION(Register, r26);
-REGISTER_DEFINITION(Register, r27);
-REGISTER_DEFINITION(Register, r28);
-REGISTER_DEFINITION(Register, r29);
-REGISTER_DEFINITION(Register, r30);
-REGISTER_DEFINITION(Register, sp);
-
-REGISTER_DEFINITION(FloatRegister, fnoreg);
-
-REGISTER_DEFINITION(FloatRegister, v0);
-REGISTER_DEFINITION(FloatRegister, v1);
-REGISTER_DEFINITION(FloatRegister, v2);
-REGISTER_DEFINITION(FloatRegister, v3);
-REGISTER_DEFINITION(FloatRegister, v4);
-REGISTER_DEFINITION(FloatRegister, v5);
-REGISTER_DEFINITION(FloatRegister, v6);
-REGISTER_DEFINITION(FloatRegister, v7);
-REGISTER_DEFINITION(FloatRegister, v8);
-REGISTER_DEFINITION(FloatRegister, v9);
-REGISTER_DEFINITION(FloatRegister, v10);
-REGISTER_DEFINITION(FloatRegister, v11);
-REGISTER_DEFINITION(FloatRegister, v12);
-REGISTER_DEFINITION(FloatRegister, v13);
-REGISTER_DEFINITION(FloatRegister, v14);
-REGISTER_DEFINITION(FloatRegister, v15);
-REGISTER_DEFINITION(FloatRegister, v16);
-REGISTER_DEFINITION(FloatRegister, v17);
-REGISTER_DEFINITION(FloatRegister, v18);
-REGISTER_DEFINITION(FloatRegister, v19);
-REGISTER_DEFINITION(FloatRegister, v20);
-REGISTER_DEFINITION(FloatRegister, v21);
-REGISTER_DEFINITION(FloatRegister, v22);
-REGISTER_DEFINITION(FloatRegister, v23);
-REGISTER_DEFINITION(FloatRegister, v24);
-REGISTER_DEFINITION(FloatRegister, v25);
-REGISTER_DEFINITION(FloatRegister, v26);
-REGISTER_DEFINITION(FloatRegister, v27);
-REGISTER_DEFINITION(FloatRegister, v28);
-REGISTER_DEFINITION(FloatRegister, v29);
-REGISTER_DEFINITION(FloatRegister, v30);
-REGISTER_DEFINITION(FloatRegister, v31);
-
-REGISTER_DEFINITION(Register, zr);
-
-REGISTER_DEFINITION(Register, c_rarg0);
-REGISTER_DEFINITION(Register, c_rarg1);
-REGISTER_DEFINITION(Register, c_rarg2);
-REGISTER_DEFINITION(Register, c_rarg3);
-REGISTER_DEFINITION(Register, c_rarg4);
-REGISTER_DEFINITION(Register, c_rarg5);
-REGISTER_DEFINITION(Register, c_rarg6);
-REGISTER_DEFINITION(Register, c_rarg7);
-
-REGISTER_DEFINITION(FloatRegister, c_farg0);
-REGISTER_DEFINITION(FloatRegister, c_farg1);
-REGISTER_DEFINITION(FloatRegister, c_farg2);
-REGISTER_DEFINITION(FloatRegister, c_farg3);
-REGISTER_DEFINITION(FloatRegister, c_farg4);
-REGISTER_DEFINITION(FloatRegister, c_farg5);
-REGISTER_DEFINITION(FloatRegister, c_farg6);
-REGISTER_DEFINITION(FloatRegister, c_farg7);
-
-REGISTER_DEFINITION(Register, j_rarg0);
-REGISTER_DEFINITION(Register, j_rarg1);
-REGISTER_DEFINITION(Register, j_rarg2);
-REGISTER_DEFINITION(Register, j_rarg3);
-REGISTER_DEFINITION(Register, j_rarg4);
-REGISTER_DEFINITION(Register, j_rarg5);
-REGISTER_DEFINITION(Register, j_rarg6);
-REGISTER_DEFINITION(Register, j_rarg7);
-
-REGISTER_DEFINITION(FloatRegister, j_farg0);
-REGISTER_DEFINITION(FloatRegister, j_farg1);
-REGISTER_DEFINITION(FloatRegister, j_farg2);
-REGISTER_DEFINITION(FloatRegister, j_farg3);
-REGISTER_DEFINITION(FloatRegister, j_farg4);
-REGISTER_DEFINITION(FloatRegister, j_farg5);
-REGISTER_DEFINITION(FloatRegister, j_farg6);
-REGISTER_DEFINITION(FloatRegister, j_farg7);
-
-REGISTER_DEFINITION(Register, rscratch1);
-REGISTER_DEFINITION(Register, rscratch2);
-REGISTER_DEFINITION(Register, esp);
-REGISTER_DEFINITION(Register, rdispatch);
-REGISTER_DEFINITION(Register, rcpool);
-REGISTER_DEFINITION(Register, rmonitors);
-REGISTER_DEFINITION(Register, rlocals);
-REGISTER_DEFINITION(Register, rmethod);
-REGISTER_DEFINITION(Register, rbcp);
-
-REGISTER_DEFINITION(Register, lr);
-REGISTER_DEFINITION(Register, rfp);
-REGISTER_DEFINITION(Register, rthread);
-REGISTER_DEFINITION(Register, rheapbase);
-
-REGISTER_DEFINITION(Register, r31_sp);
-
-REGISTER_DEFINITION(FloatRegister, z0);
-REGISTER_DEFINITION(FloatRegister, z1);
-REGISTER_DEFINITION(FloatRegister, z2);
-REGISTER_DEFINITION(FloatRegister, z3);
-REGISTER_DEFINITION(FloatRegister, z4);
-REGISTER_DEFINITION(FloatRegister, z5);
-REGISTER_DEFINITION(FloatRegister, z6);
-REGISTER_DEFINITION(FloatRegister, z7);
-REGISTER_DEFINITION(FloatRegister, z8);
-REGISTER_DEFINITION(FloatRegister, z9);
-REGISTER_DEFINITION(FloatRegister, z10);
-REGISTER_DEFINITION(FloatRegister, z11);
-REGISTER_DEFINITION(FloatRegister, z12);
-REGISTER_DEFINITION(FloatRegister, z13);
-REGISTER_DEFINITION(FloatRegister, z14);
-REGISTER_DEFINITION(FloatRegister, z15);
-REGISTER_DEFINITION(FloatRegister, z16);
-REGISTER_DEFINITION(FloatRegister, z17);
-REGISTER_DEFINITION(FloatRegister, z18);
-REGISTER_DEFINITION(FloatRegister, z19);
-REGISTER_DEFINITION(FloatRegister, z20);
-REGISTER_DEFINITION(FloatRegister, z21);
-REGISTER_DEFINITION(FloatRegister, z22);
-REGISTER_DEFINITION(FloatRegister, z23);
-REGISTER_DEFINITION(FloatRegister, z24);
-REGISTER_DEFINITION(FloatRegister, z25);
-REGISTER_DEFINITION(FloatRegister, z26);
-REGISTER_DEFINITION(FloatRegister, z27);
-REGISTER_DEFINITION(FloatRegister, z28);
-REGISTER_DEFINITION(FloatRegister, z29);
-REGISTER_DEFINITION(FloatRegister, z30);
-REGISTER_DEFINITION(FloatRegister, z31);
-
-REGISTER_DEFINITION(PRegister, p0);
-REGISTER_DEFINITION(PRegister, p1);
-REGISTER_DEFINITION(PRegister, p2);
-REGISTER_DEFINITION(PRegister, p3);
-REGISTER_DEFINITION(PRegister, p4);
-REGISTER_DEFINITION(PRegister, p5);
-REGISTER_DEFINITION(PRegister, p6);
-REGISTER_DEFINITION(PRegister, p7);
-REGISTER_DEFINITION(PRegister, p8);
-REGISTER_DEFINITION(PRegister, p9);
-REGISTER_DEFINITION(PRegister, p10);
-REGISTER_DEFINITION(PRegister, p11);
-REGISTER_DEFINITION(PRegister, p12);
-REGISTER_DEFINITION(PRegister, p13);
-REGISTER_DEFINITION(PRegister, p14);
-REGISTER_DEFINITION(PRegister, p15);
-
-REGISTER_DEFINITION(PRegister, ptrue);
diff --git a/src/hotspot/cpu/aarch64/sharedRuntime_aarch64.cpp b/src/hotspot/cpu/aarch64/sharedRuntime_aarch64.cpp
index 08cc2b20a61e2ad6d33f09446d29942057009db9..18c6d227823075bcf802fde79cec599eb387c952 100644
--- a/src/hotspot/cpu/aarch64/sharedRuntime_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/sharedRuntime_aarch64.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2003, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2003, 2022, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2014, 2021, Red Hat Inc. All rights reserved.
* Copyright (c) 2021, Azul Systems, Inc. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
@@ -410,6 +410,7 @@ static void patch_callers_callsite(MacroAssembler *masm) {
__ mov(c_rarg0, rmethod);
__ mov(c_rarg1, lr);
+ __ authenticate_return_address(c_rarg1, rscratch1);
__ lea(rscratch1, RuntimeAddress(CAST_FROM_FN_PTR(address, SharedRuntime::fixup_callers_callsite)));
__ blr(rscratch1);
@@ -2178,8 +2179,8 @@ void SharedRuntime::generate_deopt_blob() {
// load throwing pc from JavaThread and patch it as the return address
// of the current frame. Then clear the field in JavaThread
-
__ ldr(r3, Address(rthread, JavaThread::exception_pc_offset()));
+ __ protect_return_address(r3, rscratch1);
__ str(r3, Address(rfp, wordSize));
__ str(zr, Address(rthread, JavaThread::exception_pc_offset()));
@@ -2287,6 +2288,7 @@ void SharedRuntime::generate_deopt_blob() {
__ sub(r2, r2, 2 * wordSize);
__ add(sp, sp, r2);
__ ldp(rfp, lr, __ post(sp, 2 * wordSize));
+ __ authenticate_return_address();
// LR should now be the return address to the caller (3)
#ifdef ASSERT
@@ -2428,6 +2430,7 @@ void SharedRuntime::generate_uncommon_trap_blob() {
// Push self-frame. We get here with a return address in LR
// and sp should be 16 byte aligned
// push rfp and retaddr by hand
+ __ protect_return_address();
__ stp(rfp, lr, Address(__ pre(sp, -2 * wordSize)));
// we don't expect an arg reg save area
#ifndef PRODUCT
@@ -2502,6 +2505,7 @@ void SharedRuntime::generate_uncommon_trap_blob() {
__ sub(r2, r2, 2 * wordSize);
__ add(sp, sp, r2);
__ ldp(rfp, lr, __ post(sp, 2 * wordSize));
+ __ authenticate_return_address();
// LR should now be the return address to the caller (3) frame
#ifdef ASSERT
@@ -2624,6 +2628,11 @@ SafepointBlob* SharedRuntime::generate_handler_blob(address call_ptr, int poll_t
bool cause_return = (poll_type == POLL_AT_RETURN);
RegisterSaver reg_save(poll_type == POLL_AT_VECTOR_LOOP /* save_vectors */);
+  // When the signal occurred, the LR was either signed and stored on the stack (in which
+  // case it will be restored from the stack before being used) or unsigned and not stored
+  // on the stack. Stripping ensures we get the right value.
+ __ strip_return_address();
+
// Save Integer and Float registers.
map = reg_save.save_live_registers(masm, 0, &frame_size_in_words);
@@ -2643,6 +2652,7 @@ SafepointBlob* SharedRuntime::generate_handler_blob(address call_ptr, int poll_t
// it later to determine if someone changed the return address for
// us!
__ ldr(r20, Address(rthread, JavaThread::saved_exception_pc_offset()));
+ __ protect_return_address(r20, rscratch1);
__ str(r20, Address(rfp, wordSize));
}
@@ -2683,6 +2693,7 @@ SafepointBlob* SharedRuntime::generate_handler_blob(address call_ptr, int poll_t
__ ldr(rscratch1, Address(rfp, wordSize));
__ cmp(r20, rscratch1);
__ br(Assembler::NE, no_adjust);
+ __ authenticate_return_address(r20, rscratch1);
#ifdef ASSERT
// Verify the correct encoding of the poll we're about to skip.
@@ -2697,6 +2708,7 @@ SafepointBlob* SharedRuntime::generate_handler_blob(address call_ptr, int poll_t
#endif
// Adjust return pc forward to step over the safepoint poll instruction
__ add(r20, r20, NativeInstruction::instruction_size);
+ __ protect_return_address(r20, rscratch1);
__ str(r20, Address(rfp, wordSize));
}
@@ -2857,6 +2869,7 @@ void OptoRuntime::generate_exception_blob() {
// push rfp and retaddr by hand
// Exception pc is 'return address' for stack walker
+ __ protect_return_address();
__ stp(rfp, lr, Address(__ pre(sp, -2 * wordSize)));
// there are no callee save registers and we don't expect an
// arg reg save area
@@ -2910,6 +2923,7 @@ void OptoRuntime::generate_exception_blob() {
// there are no callee save registers now that adapter frames are gone.
// and we dont' expect an arg reg save area
__ ldp(rfp, r3, Address(__ post(sp, 2 * wordSize)));
+ __ authenticate_return_address(r3);
// r0: exception handler
diff --git a/src/hotspot/cpu/aarch64/stubGenerator_aarch64.cpp b/src/hotspot/cpu/aarch64/stubGenerator_aarch64.cpp
index a26c4a1597625ec1ceafbdcf12384def1b38cf22..1b41f09d97221799ac92ee3e430b10298193c663 100644
--- a/src/hotspot/cpu/aarch64/stubGenerator_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/stubGenerator_aarch64.cpp
@@ -26,6 +26,7 @@
#include "precompiled.hpp"
#include "asm/macroAssembler.hpp"
#include "asm/macroAssembler.inline.hpp"
+#include "asm/register.hpp"
#include "atomic_aarch64.hpp"
#include "compiler/oopMap.hpp"
#include "gc/shared/barrierSet.hpp"
@@ -1320,10 +1321,10 @@ class StubGenerator: public StubCodeGenerator {
void clobber_registers() {
#ifdef ASSERT
RegSet clobbered
- = MacroAssembler::call_clobbered_registers() - rscratch1;
+ = MacroAssembler::call_clobbered_gp_registers() - rscratch1;
__ mov(rscratch1, (uint64_t)0xdeadbeef);
__ orr(rscratch1, rscratch1, rscratch1, Assembler::LSL, 32);
- for (RegSetIterator<> it = clobbered.begin(); *it != noreg; ++it) {
+    for (RegSetIterator<Register> it = clobbered.begin(); *it != noreg; ++it) {
__ mov(*it, rscratch1);
}
#endif
@@ -4657,7 +4658,7 @@ class StubGenerator: public StubCodeGenerator {
return start;
}
- address generate_has_negatives(address &has_negatives_long) {
+ address generate_count_positives(address &count_positives_long) {
const u1 large_loop_size = 64;
const uint64_t UPPER_BIT_MASK=0x8080808080808080;
int dcache_line = VM_Version::dcache_line_size();
@@ -4666,13 +4667,15 @@ class StubGenerator: public StubCodeGenerator {
__ align(CodeEntryAlignment);
- StubCodeMark mark(this, "StubRoutines", "has_negatives");
+ StubCodeMark mark(this, "StubRoutines", "count_positives");
address entry = __ pc();
__ enter();
+ // precondition: a copy of len is already in result
+ // __ mov(result, len);
- Label RET_TRUE, RET_TRUE_NO_POP, RET_FALSE, ALIGNED, LOOP16, CHECK_16, DONE,
+ Label RET_ADJUST, RET_ADJUST_16, RET_ADJUST_LONG, RET_NO_POP, RET_LEN, ALIGNED, LOOP16, CHECK_16,
LARGE_LOOP, POST_LOOP16, LEN_OVER_15, LEN_OVER_8, POST_LOOP16_LOAD_TAIL;
__ cmp(len, (u1)15);
@@ -4686,25 +4689,26 @@ class StubGenerator: public StubCodeGenerator {
__ sub(rscratch1, zr, len, __ LSL, 3); // LSL 3 is to get bits from bytes.
__ lsrv(rscratch2, rscratch2, rscratch1);
__ tst(rscratch2, UPPER_BIT_MASK);
- __ cset(result, Assembler::NE);
+ __ csel(result, zr, result, Assembler::NE);
__ leave();
__ ret(lr);
__ bind(LEN_OVER_8);
__ ldp(rscratch1, rscratch2, Address(ary1, -16));
__ sub(len, len, 8); // no data dep., then sub can be executed while loading
__ tst(rscratch2, UPPER_BIT_MASK);
- __ br(Assembler::NE, RET_TRUE_NO_POP);
+ __ br(Assembler::NE, RET_NO_POP);
__ sub(rscratch2, zr, len, __ LSL, 3); // LSL 3 is to get bits from bytes
__ lsrv(rscratch1, rscratch1, rscratch2);
__ tst(rscratch1, UPPER_BIT_MASK);
- __ cset(result, Assembler::NE);
+ __ bind(RET_NO_POP);
+ __ csel(result, zr, result, Assembler::NE);
__ leave();
__ ret(lr);
Register tmp1 = r3, tmp2 = r4, tmp3 = r5, tmp4 = r6, tmp5 = r7, tmp6 = r10;
const RegSet spilled_regs = RegSet::range(tmp1, tmp5) + tmp6;
- has_negatives_long = __ pc(); // 2nd entry point
+ count_positives_long = __ pc(); // 2nd entry point
__ enter();
@@ -4716,10 +4720,10 @@ class StubGenerator: public StubCodeGenerator {
__ mov(tmp5, 16);
__ sub(rscratch1, tmp5, rscratch2); // amount of bytes until aligned address
__ add(ary1, ary1, rscratch1);
- __ sub(len, len, rscratch1);
__ orr(tmp6, tmp6, tmp1);
__ tst(tmp6, UPPER_BIT_MASK);
- __ br(Assembler::NE, RET_TRUE);
+ __ br(Assembler::NE, RET_ADJUST);
+ __ sub(len, len, rscratch1);
__ bind(ALIGNED);
__ cmp(len, large_loop_size);
@@ -4734,7 +4738,7 @@ class StubGenerator: public StubCodeGenerator {
__ sub(len, len, 16);
__ orr(tmp6, tmp6, tmp1);
__ tst(tmp6, UPPER_BIT_MASK);
- __ br(Assembler::NE, RET_TRUE);
+ __ br(Assembler::NE, RET_ADJUST_16);
__ cmp(len, large_loop_size);
__ br(Assembler::LT, CHECK_16);
@@ -4766,7 +4770,7 @@ class StubGenerator: public StubCodeGenerator {
__ orr(rscratch1, rscratch1, tmp6);
__ orr(tmp2, tmp2, rscratch1);
__ tst(tmp2, UPPER_BIT_MASK);
- __ br(Assembler::NE, RET_TRUE);
+ __ br(Assembler::NE, RET_ADJUST_LONG);
__ cmp(len, large_loop_size);
__ br(Assembler::GE, LARGE_LOOP);
@@ -4779,7 +4783,7 @@ class StubGenerator: public StubCodeGenerator {
__ sub(len, len, 16);
__ orr(tmp2, tmp2, tmp3);
__ tst(tmp2, UPPER_BIT_MASK);
- __ br(Assembler::NE, RET_TRUE);
+ __ br(Assembler::NE, RET_ADJUST_16);
__ cmp(len, (u1)16);
__ br(Assembler::GE, LOOP16); // 16-byte load loop end
@@ -4787,37 +4791,38 @@ class StubGenerator: public StubCodeGenerator {
__ cmp(len, (u1)8);
__ br(Assembler::LE, POST_LOOP16_LOAD_TAIL);
__ ldr(tmp3, Address(__ post(ary1, 8)));
- __ sub(len, len, 8);
__ tst(tmp3, UPPER_BIT_MASK);
- __ br(Assembler::NE, RET_TRUE);
+ __ br(Assembler::NE, RET_ADJUST);
+ __ sub(len, len, 8);
__ bind(POST_LOOP16_LOAD_TAIL);
- __ cbz(len, RET_FALSE); // Can't shift left by 64 when len==0
+ __ cbz(len, RET_LEN); // Can't shift left by 64 when len==0
__ ldr(tmp1, Address(ary1));
__ mov(tmp2, 64);
__ sub(tmp4, tmp2, len, __ LSL, 3);
__ lslv(tmp1, tmp1, tmp4);
__ tst(tmp1, UPPER_BIT_MASK);
- __ br(Assembler::NE, RET_TRUE);
+ __ br(Assembler::NE, RET_ADJUST);
// Fallthrough
- __ bind(RET_FALSE);
+ __ bind(RET_LEN);
__ pop(spilled_regs, sp);
__ leave();
- __ mov(result, zr);
__ ret(lr);
- __ bind(RET_TRUE);
- __ pop(spilled_regs, sp);
- __ bind(RET_TRUE_NO_POP);
- __ leave();
- __ mov(result, 1);
- __ ret(lr);
+  // difference result - len is the count of bytes guaranteed to be
+  // positive
- __ bind(DONE);
+ __ bind(RET_ADJUST_LONG);
+ __ add(len, len, (u1)(large_loop_size - 16));
+ __ bind(RET_ADJUST_16);
+ __ add(len, len, 16);
+ __ bind(RET_ADJUST);
__ pop(spilled_regs, sp);
__ leave();
+ __ sub(result, result, len);
__ ret(lr);
+
return entry;
}
@@ -6625,7 +6630,7 @@ class StubGenerator: public StubCodeGenerator {
// Register allocation
- RegSetIterator<> regs = (RegSet::range(r0, r26) - r18_tls).begin();
+  RegSetIterator<Register> regs = (RegSet::range(r0, r26) - r18_tls).begin();
Pa_base = *regs; // Argument registers
if (squaring)
Pb_base = Pa_base;
@@ -7519,8 +7524,8 @@ class StubGenerator: public StubCodeGenerator {
// arraycopy stubs used by compilers
generate_arraycopy_stubs();
- // has negatives stub for large arrays.
- StubRoutines::aarch64::_has_negatives = generate_has_negatives(StubRoutines::aarch64::_has_negatives_long);
+ // countPositives stub for large arrays.
+ StubRoutines::aarch64::_count_positives = generate_count_positives(StubRoutines::aarch64::_count_positives_long);
// array equals stub for large arrays.
if (!UseSimpleArrayEquals) {
diff --git a/src/hotspot/cpu/aarch64/stubRoutines_aarch64.cpp b/src/hotspot/cpu/aarch64/stubRoutines_aarch64.cpp
index 9e16a1f9f8812268779be2caad8e459c123816ae..2e1c4b695425274c4f41223cf36e1ded7507dfa2 100644
--- a/src/hotspot/cpu/aarch64/stubRoutines_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/stubRoutines_aarch64.cpp
@@ -45,8 +45,8 @@ address StubRoutines::aarch64::_float_sign_flip = NULL;
address StubRoutines::aarch64::_double_sign_mask = NULL;
address StubRoutines::aarch64::_double_sign_flip = NULL;
address StubRoutines::aarch64::_zero_blocks = NULL;
-address StubRoutines::aarch64::_has_negatives = NULL;
-address StubRoutines::aarch64::_has_negatives_long = NULL;
+address StubRoutines::aarch64::_count_positives = NULL;
+address StubRoutines::aarch64::_count_positives_long = NULL;
address StubRoutines::aarch64::_large_array_equals = NULL;
address StubRoutines::aarch64::_compare_long_string_LL = NULL;
address StubRoutines::aarch64::_compare_long_string_UU = NULL;
diff --git a/src/hotspot/cpu/aarch64/stubRoutines_aarch64.hpp b/src/hotspot/cpu/aarch64/stubRoutines_aarch64.hpp
index 295264b7aaf22067503fcbc7fb3db2cab7222870..a17e7540e42d24eac8f2db642c7d1f0c407b75ba 100644
--- a/src/hotspot/cpu/aarch64/stubRoutines_aarch64.hpp
+++ b/src/hotspot/cpu/aarch64/stubRoutines_aarch64.hpp
@@ -58,8 +58,6 @@ class aarch64 {
static address _zero_blocks;
- static address _has_negatives;
- static address _has_negatives_long;
static address _large_array_equals;
static address _compare_long_string_LL;
static address _compare_long_string_LU;
@@ -78,6 +76,9 @@ class aarch64 {
public:
+ static address _count_positives;
+ static address _count_positives_long;
+
static address get_previous_sp_entry()
{
return _get_previous_sp_entry;
@@ -131,12 +132,12 @@ class aarch64 {
return _zero_blocks;
}
- static address has_negatives() {
- return _has_negatives;
+ static address count_positives() {
+ return _count_positives;
}
- static address has_negatives_long() {
- return _has_negatives_long;
+ static address count_positives_long() {
+ return _count_positives_long;
}
static address large_array_equals() {
diff --git a/src/hotspot/cpu/aarch64/templateInterpreterGenerator_aarch64.cpp b/src/hotspot/cpu/aarch64/templateInterpreterGenerator_aarch64.cpp
index e20cffd57670b49e7ee7372a5432f1a0c1249e2b..bf0a4e4472927e33b12b18c2ea8a310b665c3758 100644
--- a/src/hotspot/cpu/aarch64/templateInterpreterGenerator_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/templateInterpreterGenerator_aarch64.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2003, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2003, 2022, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2014, 2020, Red Hat Inc. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -832,6 +832,7 @@ void TemplateInterpreterGenerator::generate_fixed_frame(bool native_call) {
__ ldr(rcpool, Address(rcpool, ConstantPool::cache_offset_in_bytes()));
__ stp(rlocals, rcpool, Address(sp, 2 * wordSize));
+ __ protect_return_address();
__ stp(rfp, lr, Address(sp, 10 * wordSize));
__ lea(rfp, Address(sp, 10 * wordSize));
@@ -1748,6 +1749,8 @@ void TemplateInterpreterGenerator::generate_throw_exception() {
// adapter frames in C2.
Label caller_not_deoptimized;
__ ldr(c_rarg1, Address(rfp, frame::return_addr_offset * wordSize));
+ // This is a return address, so requires authenticating for PAC.
+ __ authenticate_return_address(c_rarg1, rscratch1);
__ super_call_VM_leaf(CAST_FROM_FN_PTR(address,
InterpreterRuntime::interpreter_contains), c_rarg1);
__ cbnz(r0, caller_not_deoptimized);
@@ -1937,6 +1940,7 @@ void TemplateInterpreterGenerator::set_vtos_entry_points(Template* t,
address TemplateInterpreterGenerator::generate_trace_code(TosState state) {
address entry = __ pc();
+ __ protect_return_address();
__ push(lr);
__ push(state);
__ push(RegSet::range(r0, r15), sp);
@@ -1947,6 +1951,7 @@ address TemplateInterpreterGenerator::generate_trace_code(TosState state) {
__ pop(RegSet::range(r0, r15), sp);
__ pop(state);
__ pop(lr);
+ __ authenticate_return_address();
__ ret(lr); // return from result handler
return entry;
diff --git a/src/hotspot/cpu/aarch64/vm_version_aarch64.cpp b/src/hotspot/cpu/aarch64/vm_version_aarch64.cpp
index b0c0c64f6d93b52b08384a1e01c0845c6861f71c..d2a573ac63bd6880060d33f45cd095e084061b4b 100644
--- a/src/hotspot/cpu/aarch64/vm_version_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/vm_version_aarch64.cpp
@@ -45,6 +45,7 @@ int VM_Version::_zva_length;
int VM_Version::_dcache_line_size;
int VM_Version::_icache_line_size;
int VM_Version::_initial_sve_vector_length;
+bool VM_Version::_rop_protection;
SpinWait VM_Version::_spin_wait;
@@ -409,6 +410,39 @@ void VM_Version::initialize() {
UsePopCountInstruction = true;
}
+ if (UseBranchProtection == nullptr || strcmp(UseBranchProtection, "none") == 0) {
+ _rop_protection = false;
+ } else if (strcmp(UseBranchProtection, "standard") == 0) {
+ _rop_protection = false;
+ // Enable PAC if this code has been built with branch-protection and the CPU/OS supports it.
+#ifdef __ARM_FEATURE_PAC_DEFAULT
+ if ((_features & CPU_PACA) != 0) {
+ _rop_protection = true;
+ }
+#endif
+ } else if (strcmp(UseBranchProtection, "pac-ret") == 0) {
+ _rop_protection = true;
+#ifdef __ARM_FEATURE_PAC_DEFAULT
+ if ((_features & CPU_PACA) == 0) {
+ warning("ROP-protection specified, but not supported on this CPU.");
+ // Disable PAC to prevent illegal instruction crashes.
+ _rop_protection = false;
+ }
+#else
+ warning("ROP-protection specified, but this VM was built without ROP-protection support.");
+#endif
+ } else {
+ vm_exit_during_initialization(err_msg("Unsupported UseBranchProtection: %s", UseBranchProtection));
+ }
+
+ // The frame pointer must be preserved for ROP protection.
+ if (_rop_protection == true) {
+    if (FLAG_IS_DEFAULT(PreserveFramePointer) == false && PreserveFramePointer == false) {
+ vm_exit_during_initialization(err_msg("PreserveFramePointer cannot be disabled for ROP-protection"));
+ }
+ PreserveFramePointer = true;
+ }
+
#ifdef COMPILER2
if (FLAG_IS_DEFAULT(UseMultiplyToLenIntrinsic)) {
UseMultiplyToLenIntrinsic = true;
diff --git a/src/hotspot/cpu/aarch64/vm_version_aarch64.hpp b/src/hotspot/cpu/aarch64/vm_version_aarch64.hpp
index b6aec7ed01f985c97b009fb2dde99b73df3b7af8..e979f62b926c78c927f59b716c4d8fcff1f380a1 100644
--- a/src/hotspot/cpu/aarch64/vm_version_aarch64.hpp
+++ b/src/hotspot/cpu/aarch64/vm_version_aarch64.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1997, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2022, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2014, 2020, Red Hat Inc. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -45,6 +45,7 @@ protected:
static int _dcache_line_size;
static int _icache_line_size;
static int _initial_sve_vector_length;
+ static bool _rop_protection;
static SpinWait _spin_wait;
@@ -114,10 +115,11 @@ public:
decl(SHA3, "sha3", 17) \
decl(SHA512, "sha512", 21) \
decl(SVE, "sve", 22) \
+ decl(PACA, "paca", 30) \
/* flags above must follow Linux HWCAP */ \
decl(SVE2, "sve2", 28) \
decl(STXR_PREFETCH, "stxr_prefetch", 29) \
- decl(A53MAC, "a53mac", 30)
+ decl(A53MAC, "a53mac", 31)
#define DECLARE_CPU_FEATURE_FLAG(id, name, bit) CPU_##id = (1 << bit),
CPU_FEATURE_FLAGS(DECLARE_CPU_FEATURE_FLAG)
@@ -156,6 +158,7 @@ public:
static void initialize_cpu_information(void);
+ static bool use_rop_protection() { return _rop_protection; }
};
#endif // CPU_AARCH64_VM_VERSION_AARCH64_HPP
diff --git a/src/hotspot/cpu/arm/frame_arm.inline.hpp b/src/hotspot/cpu/arm/frame_arm.inline.hpp
index 835edd68493e66e82345eb848d304c0202e27b79..773b6d06f7b4c426052eb7b2d357730c78bc3f20 100644
--- a/src/hotspot/cpu/arm/frame_arm.inline.hpp
+++ b/src/hotspot/cpu/arm/frame_arm.inline.hpp
@@ -124,9 +124,13 @@ inline intptr_t* frame::id(void) const { return unextended_sp(); }
inline bool frame::is_older(intptr_t* id) const { assert(this->id() != NULL && id != NULL, "NULL frame id");
return this->id() > id ; }
-
inline intptr_t* frame::link() const { return (intptr_t*) *(intptr_t **)addr_at(link_offset); }
+inline intptr_t* frame::link_or_null() const {
+ intptr_t** ptr = (intptr_t **)addr_at(link_offset);
+ return os::is_readable_pointer(ptr) ? *ptr : NULL;
+}
+
inline intptr_t* frame::unextended_sp() const { return _unextended_sp; }
// Return address:
diff --git a/src/hotspot/cpu/arm/matcher_arm.hpp b/src/hotspot/cpu/arm/matcher_arm.hpp
index 7552b014c061eaba521fc77f472cbf3fe76a8722..496ea27c0861b0d1a8ae1ca2dbc712bc40e48467 100644
--- a/src/hotspot/cpu/arm/matcher_arm.hpp
+++ b/src/hotspot/cpu/arm/matcher_arm.hpp
@@ -155,4 +155,9 @@
// Implements a variant of EncodeISOArrayNode that encode ASCII only
static const bool supports_encode_ascii_array = false;
+ // Returns pre-selection estimated cost of a vector operation.
+ static int vector_op_pre_select_sz_estimate(int vopc, BasicType ety, int vlen) {
+ return 0;
+ }
+
#endif // CPU_ARM_MATCHER_ARM_HPP
diff --git a/src/hotspot/cpu/arm/register_definitions_arm.cpp b/src/hotspot/cpu/arm/register_definitions_arm.cpp
deleted file mode 100644
index 4aa7714970cbb1e0ebe4ed38983c56cf4a7bff72..0000000000000000000000000000000000000000
--- a/src/hotspot/cpu/arm/register_definitions_arm.cpp
+++ /dev/null
@@ -1,100 +0,0 @@
-/*
- * Copyright (c) 2008, 2016, Oracle and/or its affiliates. All rights reserved.
- * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
- *
- * This code is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License version 2 only, as
- * published by the Free Software Foundation.
- *
- * This code is distributed in the hope that it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * version 2 for more details (a copy is included in the LICENSE file that
- * accompanied this code).
- *
- * You should have received a copy of the GNU General Public License version
- * 2 along with this work; if not, write to the Free Software Foundation,
- * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
- *
- * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
- * or visit www.oracle.com if you need additional information or have any
- * questions.
- *
- */
-
-#include "precompiled.hpp"
-#include "asm/assembler.hpp"
-#include "asm/register.hpp"
-#include "interp_masm_arm.hpp"
-#include "register_arm.hpp"
-
-REGISTER_DEFINITION(Register, noreg);
-REGISTER_DEFINITION(FloatRegister, fnoreg);
-
-
-REGISTER_DEFINITION(FloatRegister, S0);
-REGISTER_DEFINITION(FloatRegister, S1_reg);
-REGISTER_DEFINITION(FloatRegister, S2_reg);
-REGISTER_DEFINITION(FloatRegister, S3_reg);
-REGISTER_DEFINITION(FloatRegister, S4_reg);
-REGISTER_DEFINITION(FloatRegister, S5_reg);
-REGISTER_DEFINITION(FloatRegister, S6_reg);
-REGISTER_DEFINITION(FloatRegister, S7);
-REGISTER_DEFINITION(FloatRegister, S8);
-REGISTER_DEFINITION(FloatRegister, S9);
-REGISTER_DEFINITION(FloatRegister, S10);
-REGISTER_DEFINITION(FloatRegister, S11);
-REGISTER_DEFINITION(FloatRegister, S12);
-REGISTER_DEFINITION(FloatRegister, S13);
-REGISTER_DEFINITION(FloatRegister, S14);
-REGISTER_DEFINITION(FloatRegister, S15);
-REGISTER_DEFINITION(FloatRegister, S16);
-REGISTER_DEFINITION(FloatRegister, S17);
-REGISTER_DEFINITION(FloatRegister, S18);
-REGISTER_DEFINITION(FloatRegister, S19);
-REGISTER_DEFINITION(FloatRegister, S20);
-REGISTER_DEFINITION(FloatRegister, S21);
-REGISTER_DEFINITION(FloatRegister, S22);
-REGISTER_DEFINITION(FloatRegister, S23);
-REGISTER_DEFINITION(FloatRegister, S24);
-REGISTER_DEFINITION(FloatRegister, S25);
-REGISTER_DEFINITION(FloatRegister, S26);
-REGISTER_DEFINITION(FloatRegister, S27);
-REGISTER_DEFINITION(FloatRegister, S28);
-REGISTER_DEFINITION(FloatRegister, S29);
-REGISTER_DEFINITION(FloatRegister, S30);
-REGISTER_DEFINITION(FloatRegister, S31);
-REGISTER_DEFINITION(FloatRegister, Stemp);
-REGISTER_DEFINITION(FloatRegister, D0);
-REGISTER_DEFINITION(FloatRegister, D1);
-REGISTER_DEFINITION(FloatRegister, D2);
-REGISTER_DEFINITION(FloatRegister, D3);
-REGISTER_DEFINITION(FloatRegister, D4);
-REGISTER_DEFINITION(FloatRegister, D5);
-REGISTER_DEFINITION(FloatRegister, D6);
-REGISTER_DEFINITION(FloatRegister, D7);
-REGISTER_DEFINITION(FloatRegister, D8);
-REGISTER_DEFINITION(FloatRegister, D9);
-REGISTER_DEFINITION(FloatRegister, D10);
-REGISTER_DEFINITION(FloatRegister, D11);
-REGISTER_DEFINITION(FloatRegister, D12);
-REGISTER_DEFINITION(FloatRegister, D13);
-REGISTER_DEFINITION(FloatRegister, D14);
-REGISTER_DEFINITION(FloatRegister, D15);
-REGISTER_DEFINITION(FloatRegister, D16);
-REGISTER_DEFINITION(FloatRegister, D17);
-REGISTER_DEFINITION(FloatRegister, D18);
-REGISTER_DEFINITION(FloatRegister, D19);
-REGISTER_DEFINITION(FloatRegister, D20);
-REGISTER_DEFINITION(FloatRegister, D21);
-REGISTER_DEFINITION(FloatRegister, D22);
-REGISTER_DEFINITION(FloatRegister, D23);
-REGISTER_DEFINITION(FloatRegister, D24);
-REGISTER_DEFINITION(FloatRegister, D25);
-REGISTER_DEFINITION(FloatRegister, D26);
-REGISTER_DEFINITION(FloatRegister, D27);
-REGISTER_DEFINITION(FloatRegister, D28);
-REGISTER_DEFINITION(FloatRegister, D29);
-REGISTER_DEFINITION(FloatRegister, D30);
-REGISTER_DEFINITION(FloatRegister, D31);
-
diff --git a/src/hotspot/cpu/ppc/c2_MacroAssembler_ppc.cpp b/src/hotspot/cpu/ppc/c2_MacroAssembler_ppc.cpp
index 1ae5c13e62e97e0157ba8367b69369b7d3991ee0..30104094983b579eb68c5a959958fdd9ae0af5ba 100644
--- a/src/hotspot/cpu/ppc/c2_MacroAssembler_ppc.cpp
+++ b/src/hotspot/cpu/ppc/c2_MacroAssembler_ppc.cpp
@@ -565,16 +565,16 @@ void C2_MacroAssembler::string_indexof_char(Register result, Register haystack,
} // string_indexof_char
-void C2_MacroAssembler::has_negatives(Register src, Register cnt, Register result,
- Register tmp1, Register tmp2) {
+void C2_MacroAssembler::count_positives(Register src, Register cnt, Register result,
+ Register tmp1, Register tmp2) {
const Register tmp0 = R0;
assert_different_registers(src, result, cnt, tmp0, tmp1, tmp2);
- Label Lfastloop, Lslow, Lloop, Lnoneg, Ldone;
+ Label Lfastloop, Lslow, Lloop, Ldone;
// Check if cnt >= 8 (= 16 bytes)
lis(tmp1, (int)(short)0x8080); // tmp1 = 0x8080808080808080
srwi_(tmp2, cnt, 4);
- li(result, 1); // Assume there's a negative byte.
+ mr(result, src); // Use result reg to point to the current position.
beq(CCR0, Lslow);
ori(tmp1, tmp1, 0x8080);
rldimi(tmp1, tmp1, 32, 0);
@@ -582,30 +582,28 @@ void C2_MacroAssembler::has_negatives(Register src, Register cnt, Register resul
// 2x unrolled loop
bind(Lfastloop);
- ld(tmp2, 0, src);
- ld(tmp0, 8, src);
+ ld(tmp2, 0, result);
+ ld(tmp0, 8, result);
orr(tmp0, tmp2, tmp0);
and_(tmp0, tmp0, tmp1);
- bne(CCR0, Ldone); // Found negative byte.
- addi(src, src, 16);
-
+ bne(CCR0, Lslow); // Found negative byte.
+ addi(result, result, 16);
bdnz(Lfastloop);
- bind(Lslow); // Fallback to slow version
- rldicl_(tmp0, cnt, 0, 64-4);
- beq(CCR0, Lnoneg);
+ bind(Lslow); // Fallback to slow version.
+ subf(tmp0, src, result); // Bytes known positive.
+ subf_(tmp0, tmp0, cnt); // Remaining bytes.
+ beq(CCR0, Ldone);
mtctr(tmp0);
bind(Lloop);
- lbz(tmp0, 0, src);
- addi(src, src, 1);
+ lbz(tmp0, 0, result);
andi_(tmp0, tmp0, 0x80);
bne(CCR0, Ldone); // Found negative byte.
+ addi(result, result, 1);
bdnz(Lloop);
- bind(Lnoneg);
- li(result, 0);
bind(Ldone);
+ subf(result, src, result); // Result is offset from src.
}
-
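The rename is also a semantic change: instead of answering a yes/no question, the intrinsic now reports how far the scan got. A scalar reference model of the new contract (a sketch following the Java `countPositives` definition quoted in the x86 hunk of this patch, not the PPC assembly):

```cpp
#include <cassert>

// Reference semantics of count_positives: return the number of leading
// non-negative bytes, i.e. the index of the first negative byte, or cnt
// if none is found. (has_negatives only returned a boolean.)
static int count_positives_ref(const signed char* src, int cnt) {
  for (int i = 0; i < cnt; i++) {
    if (src[i] < 0) {
      return i;  // offset of the first negative byte
    }
  }
  return cnt;    // all bytes non-negative (US-ASCII)
}
```

Note how this explains the new epilogue above: `result` walks the array and the final `subf(result, src, result)` converts it back into an offset from `src`.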
diff --git a/src/hotspot/cpu/ppc/c2_MacroAssembler_ppc.hpp b/src/hotspot/cpu/ppc/c2_MacroAssembler_ppc.hpp
index 9c4576f2eaf043f0be28ef27f56c348cae1d5de7..ef4840b08a256c61d1add926f43eb255c9551e8c 100644
--- a/src/hotspot/cpu/ppc/c2_MacroAssembler_ppc.hpp
+++ b/src/hotspot/cpu/ppc/c2_MacroAssembler_ppc.hpp
@@ -63,6 +63,6 @@
void string_indexof_char(Register result, Register haystack, Register haycnt,
Register needle, jchar needleChar, Register tmp1, Register tmp2, bool is_byte);
- void has_negatives(Register src, Register cnt, Register result, Register tmp1, Register tmp2);
+ void count_positives(Register src, Register cnt, Register result, Register tmp1, Register tmp2);
#endif // CPU_PPC_C2_MACROASSEMBLER_PPC_HPP
diff --git a/src/hotspot/cpu/ppc/copy_ppc.hpp b/src/hotspot/cpu/ppc/copy_ppc.hpp
index 06eae3c3f098054ecfae68a12ef6b1944aa21f13..0ae84b4e5959553042defa0ab7e203703831752d 100644
--- a/src/hotspot/cpu/ppc/copy_ppc.hpp
+++ b/src/hotspot/cpu/ppc/copy_ppc.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2000, 2022, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2012, 2013 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -53,21 +53,7 @@ static void pd_disjoint_words(const HeapWord* from, HeapWord* to, size_t count)
}
static void pd_disjoint_words_atomic(const HeapWord* from, HeapWord* to, size_t count) {
- switch (count) {
- case 8: to[7] = from[7];
- case 7: to[6] = from[6];
- case 6: to[5] = from[5];
- case 5: to[4] = from[4];
- case 4: to[3] = from[3];
- case 3: to[2] = from[2];
- case 2: to[1] = from[1];
- case 1: to[0] = from[0];
- case 0: break;
- default: while (count-- > 0) {
- *to++ = *from++;
- }
- break;
- }
+ shared_disjoint_words_atomic(from, to, count);
}
static void pd_aligned_conjoint_words(const HeapWord* from, HeapWord* to, size_t count) {
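For reference, the platform-specific copy this hunk deletes was a fall-through switch (a Duff's-device-style unroll), now provided by the shared `shared_disjoint_words_atomic()`. A standalone sketch of that behavior, with `Word` standing in for `HeapWord*` elements:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

typedef uintptr_t Word;  // stand-in for HeapWord; one element per store

// Counts up to 8 are fully unrolled (each case falls through to the next);
// larger counts use a plain loop. Copying whole words with single stores is
// what keeps the copy atomic per element.
static void disjoint_words_atomic_sketch(const Word* from, Word* to, size_t count) {
  switch (count) {
  case 8: to[7] = from[7]; // fallthrough
  case 7: to[6] = from[6]; // fallthrough
  case 6: to[5] = from[5]; // fallthrough
  case 5: to[4] = from[4]; // fallthrough
  case 4: to[3] = from[3]; // fallthrough
  case 3: to[2] = from[2]; // fallthrough
  case 2: to[1] = from[1]; // fallthrough
  case 1: to[0] = from[0]; // fallthrough
  case 0: break;
  default:
    while (count-- > 0) {
      *to++ = *from++;
    }
    break;
  }
}
```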
diff --git a/src/hotspot/cpu/ppc/frame_ppc.inline.hpp b/src/hotspot/cpu/ppc/frame_ppc.inline.hpp
index 705b4abefdbd77ead3ef07ee0aca064df1d8aeaa..239db8224c0c38f811ca717d08e09316b0c7fce3 100644
--- a/src/hotspot/cpu/ppc/frame_ppc.inline.hpp
+++ b/src/hotspot/cpu/ppc/frame_ppc.inline.hpp
@@ -117,6 +117,10 @@ inline intptr_t* frame::link() const {
return (intptr_t*)callers_abi()->callers_sp;
}
+inline intptr_t* frame::link_or_null() const {
+ return link();
+}
+
inline intptr_t* frame::real_fp() const {
return fp();
}
diff --git a/src/hotspot/cpu/ppc/matcher_ppc.hpp b/src/hotspot/cpu/ppc/matcher_ppc.hpp
index df2074dee24001b286dc90d0f27392cb7ec2d9f5..069c40485fea5adbc64851b83a62dd2a9cca1b50 100644
--- a/src/hotspot/cpu/ppc/matcher_ppc.hpp
+++ b/src/hotspot/cpu/ppc/matcher_ppc.hpp
@@ -164,4 +164,10 @@
// Implements a variant of EncodeISOArrayNode that encode ASCII only
static const bool supports_encode_ascii_array = true;
+ // Returns pre-selection estimated cost of a vector operation.
+ static int vector_op_pre_select_sz_estimate(int vopc, BasicType ety, int vlen) {
+ return 0;
+ }
+
+
#endif // CPU_PPC_MATCHER_PPC_HPP
diff --git a/src/hotspot/cpu/ppc/ppc.ad b/src/hotspot/cpu/ppc/ppc.ad
index 832982deb4552f1477017ba7ab174dacd72c1e47..b41c72ab449589687e5e1aed564ab56685f5aef4 100644
--- a/src/hotspot/cpu/ppc/ppc.ad
+++ b/src/hotspot/cpu/ppc/ppc.ad
@@ -12779,16 +12779,16 @@ instruct string_inflate(Universe dummy, rarg1RegP src, rarg2RegP dst, iRegIsrc l
%}
// StringCoding.java intrinsics
-instruct has_negatives(rarg1RegP ary1, iRegIsrc len, iRegIdst result, iRegLdst tmp1, iRegLdst tmp2,
- regCTR ctr, flagsRegCR0 cr0)
+instruct count_positives(iRegPsrc ary1, iRegIsrc len, iRegIdst result, iRegLdst tmp1, iRegLdst tmp2,
+ regCTR ctr, flagsRegCR0 cr0)
%{
- match(Set result (HasNegatives ary1 len));
- effect(TEMP_DEF result, USE_KILL ary1, TEMP tmp1, TEMP tmp2, KILL ctr, KILL cr0);
+ match(Set result (CountPositives ary1 len));
+ effect(TEMP_DEF result, TEMP tmp1, TEMP tmp2, KILL ctr, KILL cr0);
ins_cost(300);
- format %{ "has negatives byte[] $ary1,$len -> $result \t// KILL $tmp1, $tmp2" %}
+ format %{ "count positives byte[] $ary1,$len -> $result \t// KILL $tmp1, $tmp2" %}
ins_encode %{
- __ has_negatives($ary1$$Register, $len$$Register, $result$$Register,
- $tmp1$$Register, $tmp2$$Register);
+ __ count_positives($ary1$$Register, $len$$Register, $result$$Register,
+ $tmp1$$Register, $tmp2$$Register);
%}
ins_pipe(pipe_class_default);
%}
diff --git a/src/hotspot/cpu/s390/c2_MacroAssembler_s390.cpp b/src/hotspot/cpu/s390/c2_MacroAssembler_s390.cpp
index 04a6b88052c99cf11b4397061af77f5abd28888a..6fac285f738ace567bf016f244d59e68031db260 100644
--- a/src/hotspot/cpu/s390/c2_MacroAssembler_s390.cpp
+++ b/src/hotspot/cpu/s390/c2_MacroAssembler_s390.cpp
@@ -823,52 +823,64 @@ unsigned int C2_MacroAssembler::string_inflate_const(Register src, Register dst,
return offset() - block_start;
}
-// Kills src.
-unsigned int C2_MacroAssembler::has_negatives(Register result, Register src, Register cnt,
- Register odd_reg, Register even_reg, Register tmp) {
- int block_start = offset();
- Label Lloop1, Lloop2, Lslow, Lnotfound, Ldone;
- const Register addr = src, mask = tmp;
-
- BLOCK_COMMENT("has_negatives {");
-
- z_llgfr(Z_R1, cnt); // Number of bytes to read. (Must be a positive simm32.)
- z_llilf(mask, 0x80808080);
- z_lhi(result, 1); // Assume true.
- // Last possible addr for fast loop.
- z_lay(odd_reg, -16, Z_R1, src);
- z_chi(cnt, 16);
- z_brl(Lslow);
-
- // ind1: index, even_reg: index increment, odd_reg: index limit
- z_iihf(mask, 0x80808080);
- z_lghi(even_reg, 16);
-
- bind(Lloop1); // 16 bytes per iteration.
- z_lg(Z_R0, Address(addr));
- z_lg(Z_R1, Address(addr, 8));
- z_ogr(Z_R0, Z_R1);
- z_ngr(Z_R0, mask);
- z_brne(Ldone); // If found return 1.
- z_brxlg(addr, even_reg, Lloop1);
-
- bind(Lslow);
- z_aghi(odd_reg, 16-1); // Last possible addr for slow loop.
- z_lghi(even_reg, 1);
- z_cgr(addr, odd_reg);
- z_brh(Lnotfound);
-
- bind(Lloop2); // 1 byte per iteration.
- z_cli(Address(addr), 0x80);
- z_brnl(Ldone); // If found return 1.
- z_brxlg(addr, even_reg, Lloop2);
-
- bind(Lnotfound);
- z_lhi(result, 0);
-
- bind(Ldone);
-
- BLOCK_COMMENT("} has_negatives");
+// Returns the number of non-negative bytes (aka US-ASCII characters) found
+// before the first negative byte is encountered.
+unsigned int C2_MacroAssembler::count_positives(Register result, Register src, Register cnt, Register tmp) {
+ const unsigned int block_start = offset();
+ const unsigned int byte_mask = 0x80;
+ const unsigned int twobyte_mask = byte_mask<<8 | byte_mask;
+ const unsigned int unroll_factor = 16;
+ const unsigned int log_unroll_factor = exact_log2(unroll_factor);
+ Register pos = src; // current position in src array, restored at end
+ Register ctr = result; // loop counter, result value
+ Register mask = tmp; // holds the sign detection mask
+ Label unrolledLoop, unrolledDone, byteLoop, allDone;
+
+ assert_different_registers(result, src, cnt, tmp);
+
+ BLOCK_COMMENT("count_positives {");
+
+ lgr_if_needed(pos, src); // current position in src array
+ z_srak(ctr, cnt, log_unroll_factor); // # iterations of unrolled loop
+ z_brnh(unrolledDone); // array too short for unrolled loop
+
+ z_iilf(mask, twobyte_mask<<16 | twobyte_mask);
+ z_iihf(mask, twobyte_mask<<16 | twobyte_mask);
+
+ bind(unrolledLoop);
+ z_lmg(Z_R0, Z_R1, 0, pos);
+ z_ogr(Z_R0, Z_R1);
+ z_ngr(Z_R0, mask);
+ z_brne(unrolledDone); // There is a negative byte somewhere.
+ // ctr and pos are not updated yet ->
+ // delegate finding correct pos to byteLoop.
+ add2reg(pos, unroll_factor);
+ z_brct(ctr, unrolledLoop);
+
+ // Once we arrive here, we have to examine at most (unroll_factor - 1) bytes more.
+ // We then either have reached the end of the array or we hit a negative byte.
+ bind(unrolledDone);
+ z_sll(ctr, log_unroll_factor); // calculate # bytes not processed by unrolled loop
+ // > 0 only if a negative byte was found
+ z_lr(Z_R0, cnt); // calculate remainder bytes
+ z_nilf(Z_R0, unroll_factor - 1);
+ z_ar(ctr, Z_R0); // remaining bytes
+ z_brnh(allDone); // shortcut if nothing left to do
+
+ bind(byteLoop);
+ z_cli(0, pos, byte_mask); // unsigned comparison! byte@pos must be smaller than byte_mask
+ z_brnl(allDone); // negative byte found.
+
+ add2reg(pos, 1);
+ z_brct(ctr, byteLoop);
+
+ bind(allDone);
+
+ z_srk(ctr, cnt, ctr); // # bytes actually processed (= cnt or index of first negative byte)
+ z_sgfr(pos, ctr); // restore src
+ z_lgfr(result, ctr); // unnecessary. Only there to be sure the high word has a defined state.
+
+ BLOCK_COMMENT("} count_positives");
return offset() - block_start;
}
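Both this s390 version and the PPC version above use the same word-at-a-time test in their unrolled loops: OR chunks of bytes together and AND with a repeated 0x80 sign-bit mask; any nonzero result means some byte has its high bit set. A portable sketch of that check:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// A chunk of 8 bytes contains a negative (high-bit-set) byte iff
// (chunk & 0x8080808080808080) != 0. Byte order does not matter, since
// the mask covers the sign bit of every byte lane.
static bool chunk_has_negative(const unsigned char* p) {
  uint64_t chunk;
  std::memcpy(&chunk, p, sizeof(chunk));
  return (chunk & UINT64_C(0x8080808080808080)) != 0;
}
```

When the masked test fires, neither port knows *which* byte tripped it, which is why both fall through to a byte loop to locate the exact position.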
diff --git a/src/hotspot/cpu/s390/c2_MacroAssembler_s390.hpp b/src/hotspot/cpu/s390/c2_MacroAssembler_s390.hpp
index a6c9865649522d807d91c3255daa05e9c04ad865..a502e41ee08ee12ca3f4af48dbfe7b37a59a5b4f 100644
--- a/src/hotspot/cpu/s390/c2_MacroAssembler_s390.hpp
+++ b/src/hotspot/cpu/s390/c2_MacroAssembler_s390.hpp
@@ -57,9 +57,7 @@
// len is signed int. Counts # characters, not bytes.
unsigned int string_inflate_const(Register src, Register dst, Register tmp, int len);
- // Kills src.
- unsigned int has_negatives(Register result, Register src, Register cnt,
- Register odd_reg, Register even_reg, Register tmp);
+ unsigned int count_positives(Register result, Register src, Register cnt, Register tmp);
unsigned int string_compare(Register str1, Register str2, Register cnt1, Register cnt2,
Register odd_reg, Register even_reg, Register result, int ae);
diff --git a/src/hotspot/cpu/s390/frame_s390.inline.hpp b/src/hotspot/cpu/s390/frame_s390.inline.hpp
index d8a4395d8cad82800bda84422c6621b6c339c3f3..5574e6384e2218d99dfc7d26a6b6f303e193230c 100644
--- a/src/hotspot/cpu/s390/frame_s390.inline.hpp
+++ b/src/hotspot/cpu/s390/frame_s390.inline.hpp
@@ -155,6 +155,10 @@ inline intptr_t* frame::link() const {
return (intptr_t*) callers_abi()->callers_sp;
}
+inline intptr_t* frame::link_or_null() const {
+ return link();
+}
+
inline intptr_t** frame::interpreter_frame_locals_addr() const {
return (intptr_t**) &(ijava_state()->locals);
}
diff --git a/src/hotspot/cpu/s390/matcher_s390.hpp b/src/hotspot/cpu/s390/matcher_s390.hpp
index ac55bd84dff10a5f79b3ae1376c31a37e0351d21..5c56ec5373b7d751e98c614ac6c3b192eb34fecd 100644
--- a/src/hotspot/cpu/s390/matcher_s390.hpp
+++ b/src/hotspot/cpu/s390/matcher_s390.hpp
@@ -153,4 +153,9 @@
// Implements a variant of EncodeISOArrayNode that encode ASCII only
static const bool supports_encode_ascii_array = true;
+ // Returns pre-selection estimated cost of a vector operation.
+ static int vector_op_pre_select_sz_estimate(int vopc, BasicType ety, int vlen) {
+ return 0;
+ }
+
#endif // CPU_S390_MATCHER_S390_HPP
diff --git a/src/hotspot/cpu/s390/s390.ad b/src/hotspot/cpu/s390/s390.ad
index 74ad8ef40d31cfd15d7b7091e290487e5e2e6785..d13afd1c8b4f95db6904005ef5af208dacfde00a 100644
--- a/src/hotspot/cpu/s390/s390.ad
+++ b/src/hotspot/cpu/s390/s390.ad
@@ -10273,14 +10273,13 @@ instruct string_inflate_const(Universe dummy, iRegP src, iRegP dst, iRegI tmp, i
%}
// StringCoding.java intrinsics
-instruct has_negatives(rarg5RegP ary1, iRegI len, iRegI result, roddRegI oddReg, revenRegI evenReg, iRegI tmp, flagsReg cr) %{
- match(Set result (HasNegatives ary1 len));
- effect(TEMP_DEF result, USE_KILL ary1, TEMP oddReg, TEMP evenReg, TEMP tmp, KILL cr); // R0, R1 are killed, too.
+instruct count_positives(iRegP ary1, iRegI len, iRegI result, iRegI tmp, flagsReg cr) %{
+ match(Set result (CountPositives ary1 len));
+ effect(TEMP_DEF result, TEMP tmp, KILL cr); // R0, R1 are killed, too.
ins_cost(300);
- format %{ "has negatives byte[] $ary1($len) -> $result" %}
+ format %{ "count positives byte[] $ary1($len) -> $result" %}
ins_encode %{
- __ has_negatives($result$$Register, $ary1$$Register, $len$$Register,
- $oddReg$$Register, $evenReg$$Register, $tmp$$Register);
+ __ count_positives($result$$Register, $ary1$$Register, $len$$Register, $tmp$$Register);
%}
ins_pipe(pipe_class_dummy);
%}
diff --git a/src/hotspot/cpu/x86/assembler_x86.cpp b/src/hotspot/cpu/x86/assembler_x86.cpp
index e9652fa04b24f55e4ed48a4239f860db14aee3b9..3505e081d38f2484c2fd915ae39d003dbd8c6278 100644
--- a/src/hotspot/cpu/x86/assembler_x86.cpp
+++ b/src/hotspot/cpu/x86/assembler_x86.cpp
@@ -300,12 +300,24 @@ void Assembler::emit_arith_b(int op1, int op2, Register dst, int imm8) {
void Assembler::emit_arith(int op1, int op2, Register dst, int32_t imm32) {
assert(isByte(op1) && isByte(op2), "wrong opcode");
- assert((op1 & 0x01) == 1, "should be 32bit operation");
- assert((op1 & 0x02) == 0, "sign-extension bit should not be set");
+ assert(op1 == 0x81, "Unexpected opcode");
if (is8bit(imm32)) {
emit_int24(op1 | 0x02, // set sign bit
op2 | encode(dst),
imm32 & 0xFF);
+ } else if (dst == rax) {
+ switch (op2) {
+ case 0xD0: emit_int8(0x15); break; // adc
+ case 0xC0: emit_int8(0x05); break; // add
+ case 0xE0: emit_int8(0x25); break; // and
+ case 0xF8: emit_int8(0x3D); break; // cmp
+ case 0xC8: emit_int8(0x0D); break; // or
+ case 0xD8: emit_int8(0x1D); break; // sbb
+ case 0xE8: emit_int8(0x2D); break; // sub
+ case 0xF0: emit_int8(0x35); break; // xor
+ default: ShouldNotReachHere();
+ }
+ emit_int32(imm32);
} else {
emit_int16(op1, (op2 | encode(dst)));
emit_int32(imm32);
@@ -929,6 +941,16 @@ address Assembler::locate_operand(address inst, WhichOperand which) {
tail_size = 1;
break;
+ case 0x15: // adc rax, #32
+ case 0x05: // add rax, #32
+ case 0x25: // and rax, #32
+ case 0x3D: // cmp rax, #32
+ case 0x0D: // or rax, #32
+ case 0x1D: // sbb rax, #32
+ case 0x2D: // sub rax, #32
+ case 0x35: // xor rax, #32
+ return which == end_pc_operand ? ip + 4 : ip;
+
case 0x9B:
switch (0xFF & *ip++) {
case 0xD9: // fnstcw a
@@ -954,6 +976,11 @@ address Assembler::locate_operand(address inst, WhichOperand which) {
debug_only(has_disp32 = true); // has both kinds of operands!
break;
+ case 0xA8: // testb rax, #8
+ return which == end_pc_operand ? ip + 1 : ip;
+ case 0xA9: // testl/testq rax, #32
+ return which == end_pc_operand ? ip + 4 : ip;
+
case 0xC1: // sal a, #8; sar a, #8; shl a, #8; shr a, #8
case 0xC6: // movb a, #8
case 0x80: // cmpb a, #8
@@ -1683,12 +1710,6 @@ void Assembler::cmpl(Address dst, int32_t imm32) {
emit_int32(imm32);
}
-void Assembler::cmp(Register dst, int32_t imm32) {
- prefix(dst);
- emit_int8((unsigned char)0x3D);
- emit_int32(imm32);
-}
-
void Assembler::cmpl(Register dst, int32_t imm32) {
prefix(dst);
emit_arith(0x81, 0xF8, dst, imm32);
@@ -2389,10 +2410,7 @@ void Assembler::ldmxcsr( Address src) {
void Assembler::leal(Register dst, Address src) {
InstructionMark im(this);
-#ifdef _LP64
- emit_int8(0x67); // addr32
prefix(src, dst);
-#endif // LP64
emit_int8((unsigned char)0x8D);
emit_operand(dst, src);
}
@@ -5775,8 +5793,13 @@ void Assembler::subss(XMMRegister dst, Address src) {
void Assembler::testb(Register dst, int imm8) {
NOT_LP64(assert(dst->has_byte_register(), "must have byte register"));
- (void) prefix_and_encode(dst->encoding(), true);
- emit_arith_b(0xF6, 0xC0, dst, imm8);
+ if (dst == rax) {
+ emit_int8((unsigned char)0xA8);
+ emit_int8(imm8);
+ } else {
+ (void) prefix_and_encode(dst->encoding(), true);
+ emit_arith_b(0xF6, 0xC0, dst, imm8);
+ }
}
void Assembler::testb(Address dst, int imm8) {
@@ -5787,14 +5810,34 @@ void Assembler::testb(Address dst, int imm8) {
emit_int8(imm8);
}
+void Assembler::testl(Address dst, int32_t imm32) {
+ if (imm32 >= 0 && is8bit(imm32)) {
+ testb(dst, imm32);
+ return;
+ }
+ InstructionMark im(this);
+ emit_int8((unsigned char)0xF7);
+ emit_operand(as_Register(0), dst);
+ emit_int32(imm32);
+}
+
void Assembler::testl(Register dst, int32_t imm32) {
+ if (imm32 >= 0 && is8bit(imm32) && dst->has_byte_register()) {
+ testb(dst, imm32);
+ return;
+ }
// not using emit_arith because test
// doesn't support sign-extension of
// 8bit operands
- int encode = dst->encoding();
- encode = prefix_and_encode(encode);
- emit_int16((unsigned char)0xF7, (0xC0 | encode));
- emit_int32(imm32);
+ if (dst == rax) {
+ emit_int8((unsigned char)0xA9);
+ emit_int32(imm32);
+ } else {
+ int encode = dst->encoding();
+ encode = prefix_and_encode(encode);
+ emit_int16((unsigned char)0xF7, (0xC0 | encode));
+ emit_int32(imm32);
+ }
}
void Assembler::testl(Register dst, Register src) {
@@ -8317,8 +8360,28 @@ void Assembler::vpbroadcastw(XMMRegister dst, Address src, int vector_len) {
emit_operand(dst, src);
}
-// xmm/mem sourced byte/word/dword/qword replicate
+void Assembler::vpsadbw(XMMRegister dst, XMMRegister nds, XMMRegister src, int vector_len) {
+ assert(UseAVX > 0, "requires some form of AVX");
+ InstructionAttr attributes(vector_len, /* vex_w */ false, /* legacy_mode */ _legacy_mode_bw, /* no_mask_reg */ true, /* uses_vl */ true);
+ int encode = vex_prefix_and_encode(dst->encoding(), nds->encoding(), src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F, &attributes);
+ emit_int16((unsigned char)0xF6, (0xC0 | encode));
+}
+
+void Assembler::vpunpckhdq(XMMRegister dst, XMMRegister nds, XMMRegister src, int vector_len) {
+ assert(UseAVX > 0, "requires some form of AVX");
+ InstructionAttr attributes(vector_len, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ true);
+ int encode = vex_prefix_and_encode(dst->encoding(), nds->encoding(), src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F, &attributes);
+ emit_int16(0x6A, (0xC0 | encode));
+}
+
+void Assembler::vpunpckldq(XMMRegister dst, XMMRegister nds, XMMRegister src, int vector_len) {
+ assert(UseAVX > 0, "requires some form of AVX");
+ InstructionAttr attributes(vector_len, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ true);
+ int encode = vex_prefix_and_encode(dst->encoding(), nds->encoding(), src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F, &attributes);
+ emit_int16(0x62, (0xC0 | encode));
+}
+// xmm/mem sourced byte/word/dword/qword replicate
void Assembler::evpaddb(XMMRegister dst, KRegister mask, XMMRegister nds, XMMRegister src, bool merge, int vector_len) {
assert(VM_Version::supports_avx512bw() && (vector_len == AVX_512bit || VM_Version::supports_avx512vl()), "");
InstructionAttr attributes(vector_len, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ false,/* uses_vl */ true);
@@ -9864,12 +9927,12 @@ void Assembler::vpbroadcastq(XMMRegister dst, Address src, int vector_len) {
void Assembler::evbroadcasti32x4(XMMRegister dst, Address src, int vector_len) {
assert(vector_len != Assembler::AVX_128bit, "");
- assert(VM_Version::supports_avx512dq(), "");
+ assert(VM_Version::supports_evex(), "");
assert(dst != xnoreg, "sanity");
InstructionMark im(this);
InstructionAttr attributes(vector_len, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ true);
attributes.set_rex_vex_w_reverted();
- attributes.set_address_attributes(/* tuple_type */ EVEX_T2, /* input_size_in_bits */ EVEX_64bit);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_T4, /* input_size_in_bits */ EVEX_32bit);
// swap src<->dst for encoding
vex_prefix(src, 0, dst->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_38, &attributes);
emit_int8(0x5A);
@@ -12993,6 +13056,10 @@ void Assembler::subq(Register dst, Register src) {
}
void Assembler::testq(Address dst, int32_t imm32) {
+ if (imm32 >= 0) {
+ testl(dst, imm32);
+ return;
+ }
InstructionMark im(this);
emit_int16(get_prefixq(dst), (unsigned char)0xF7);
emit_operand(as_Register(0), dst);
@@ -13000,13 +13067,23 @@ void Assembler::testq(Address dst, int32_t imm32) {
}
void Assembler::testq(Register dst, int32_t imm32) {
+ if (imm32 >= 0) {
+ testl(dst, imm32);
+ return;
+ }
// not using emit_arith because test
// doesn't support sign-extension of
// 8bit operands
- int encode = dst->encoding();
- encode = prefixq_and_encode(encode);
- emit_int16((unsigned char)0xF7, (0xC0 | encode));
- emit_int32(imm32);
+ if (dst == rax) {
+ prefix(REX_W);
+ emit_int8((unsigned char)0xA9);
+ emit_int32(imm32);
+ } else {
+ int encode = dst->encoding();
+ encode = prefixq_and_encode(encode);
+ emit_int16((unsigned char)0xF7, (0xC0 | encode));
+ emit_int32(imm32);
+ }
}
void Assembler::testq(Register dst, Register src) {
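The `imm32 >= 0` guard in these `testq` shortcuts is what makes the narrowing to `testl` safe: a sign-extended immediate with bit 31 clear zeroes the upper half of the 64-bit mask, so the AND result's upper half is zero regardless of the register's contents and ZF/SF come out identical for both forms. A small model of that invariant (illustrative only, not JDK code):

```cpp
#include <cassert>
#include <cstdint>

// For imm32 >= 0, the flags produced by `testq reg, imm32` match those of
// `testl reg32, imm32`: ZF (result == 0) and SF (top bit of the result)
// agree, because the sign-extended 64-bit mask has a zero upper half.
static bool flags_match(uint64_t reg, int32_t imm32) {
  uint64_t and64 = reg & (uint64_t)(int64_t)imm32;   // testq result
  uint32_t and32 = (uint32_t)reg & (uint32_t)imm32;  // testl result
  bool zf_equal = (and64 == 0) == (and32 == 0);
  bool sf_equal = ((and64 >> 63) & 1) == ((and32 >> 31) & 1);
  return zf_equal && sf_equal;
}
```

For a negative immediate the equivalence breaks (the sign-extended mask reaches into the upper half), which is exactly why the code keeps the full 64-bit encoding in that case.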
diff --git a/src/hotspot/cpu/x86/assembler_x86.hpp b/src/hotspot/cpu/x86/assembler_x86.hpp
index f21dd901c5d8d237df8b1d1ab37d896a38a30e37..7141e4b96c4141f2184cab62bbff2428313bf649 100644
--- a/src/hotspot/cpu/x86/assembler_x86.hpp
+++ b/src/hotspot/cpu/x86/assembler_x86.hpp
@@ -1081,15 +1081,12 @@ private:
void cmpb(Address dst, int imm8);
void cmpl(Address dst, int32_t imm32);
-
- void cmp(Register dst, int32_t imm32);
void cmpl(Register dst, int32_t imm32);
void cmpl(Register dst, Register src);
void cmpl(Register dst, Address src);
void cmpq(Address dst, int32_t imm32);
void cmpq(Address dst, Register src);
-
void cmpq(Register dst, int32_t imm32);
void cmpq(Register dst, Register src);
void cmpq(Register dst, Address src);
@@ -1933,10 +1930,17 @@ private:
// Interleave Low Doublewords
void punpckldq(XMMRegister dst, XMMRegister src);
void punpckldq(XMMRegister dst, Address src);
+ void vpunpckldq(XMMRegister dst, XMMRegister nds, XMMRegister src, int vector_len);
+
+ // Interleave High Doublewords
+ void vpunpckhdq(XMMRegister dst, XMMRegister nds, XMMRegister src, int vector_len);
// Interleave Low Quadwords
void punpcklqdq(XMMRegister dst, XMMRegister src);
+ // Vector sum of absolute difference.
+ void vpsadbw(XMMRegister dst, XMMRegister nds, XMMRegister src, int vector_len);
+
#ifndef _LP64 // no 32bit push/pop on amd64
void pushl(Address src);
#endif
@@ -2092,9 +2096,10 @@ private:
void subss(XMMRegister dst, Address src);
void subss(XMMRegister dst, XMMRegister src);
- void testb(Register dst, int imm8);
void testb(Address dst, int imm8);
+ void testb(Register dst, int imm8);
+ void testl(Address dst, int32_t imm32);
void testl(Register dst, int32_t imm32);
void testl(Register dst, Register src);
void testl(Register dst, Address src);
diff --git a/src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp b/src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp
index 400bcec45e2a01ac946c2e84f95e0e56cb6efd77..6d8b9101303508409fa4a8b2b01770670ed69fea 100644
--- a/src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp
@@ -3374,18 +3374,19 @@ void C2_MacroAssembler::string_compare(Register str1, Register str2,
}
// Search for Non-ASCII character (Negative byte value) in a byte array,
-// return true if it has any and false otherwise.
+// return the index of the first such character, otherwise the length
+// of the array segment searched.
// ..\jdk\src\java.base\share\classes\java\lang\StringCoding.java
// @IntrinsicCandidate
-// private static boolean hasNegatives(byte[] ba, int off, int len) {
+// public static int countPositives(byte[] ba, int off, int len) {
// for (int i = off; i < off + len; i++) {
// if (ba[i] < 0) {
-// return true;
+// return i - off;
// }
// }
-// return false;
+// return len;
// }
-void C2_MacroAssembler::has_negatives(Register ary1, Register len,
+void C2_MacroAssembler::count_positives(Register ary1, Register len,
Register result, Register tmp1,
XMMRegister vec1, XMMRegister vec2, KRegister mask1, KRegister mask2) {
// rsi: byte array
@@ -3394,17 +3395,18 @@ void C2_MacroAssembler::has_negatives(Register ary1, Register len,
ShortBranchVerifier sbv(this);
assert_different_registers(ary1, len, result, tmp1);
assert_different_registers(vec1, vec2);
- Label TRUE_LABEL, FALSE_LABEL, DONE, COMPARE_CHAR, COMPARE_VECTORS, COMPARE_BYTE;
+ Label ADJUST, TAIL_ADJUST, DONE, TAIL_START, CHAR_ADJUST, COMPARE_CHAR, COMPARE_VECTORS, COMPARE_BYTE;
+ movl(result, len); // copy
// len == 0
testl(len, len);
- jcc(Assembler::zero, FALSE_LABEL);
+ jcc(Assembler::zero, DONE);
if ((AVX3Threshold == 0) && (UseAVX > 2) && // AVX512
VM_Version::supports_avx512vlbw() &&
VM_Version::supports_bmi2()) {
- Label test_64_loop, test_tail;
+ Label test_64_loop, test_tail, BREAK_LOOP;
Register tmp3_aliased = len;
movl(tmp1, len);
@@ -3421,16 +3423,15 @@ void C2_MacroAssembler::has_negatives(Register ary1, Register len,
// Check whether our 64 elements of size byte contain negatives
evpcmpgtb(mask1, vec2, Address(ary1, len, Address::times_1), Assembler::AVX_512bit);
kortestql(mask1, mask1);
- jcc(Assembler::notZero, TRUE_LABEL);
+ jcc(Assembler::notZero, BREAK_LOOP);
addptr(len, 64);
jccb(Assembler::notZero, test_64_loop);
-
bind(test_tail);
// bail out when there is nothing to be done
testl(tmp1, -1);
- jcc(Assembler::zero, FALSE_LABEL);
+ jcc(Assembler::zero, DONE);
// ~(~0 << len) applied up to two times (for 32-bit scenario)
#ifdef _LP64
@@ -3467,21 +3468,30 @@ void C2_MacroAssembler::has_negatives(Register ary1, Register len,
#endif
evpcmpgtb(mask1, mask2, vec2, Address(ary1, 0), Assembler::AVX_512bit);
ktestq(mask1, mask2);
- jcc(Assembler::notZero, TRUE_LABEL);
+ jcc(Assembler::zero, DONE);
- jmp(FALSE_LABEL);
+ bind(BREAK_LOOP);
+ // At least one byte in the last 64 bytes is negative.
+ // Set up to look at the last 64 bytes as if they were a tail
+ lea(ary1, Address(ary1, len, Address::times_1));
+ addptr(result, len);
+ // Ignore the very last byte: if all others are positive,
+ // it must be negative, so we can skip right to the 2+1 byte
+ // end comparison at this point
+ orl(result, 63);
+ movl(len, 63);
+ // Fallthru to tail compare
} else {
- movl(result, len); // copy
if (UseAVX >= 2 && UseSSE >= 2) {
// With AVX2, use 32-byte vector compare
- Label COMPARE_WIDE_VECTORS, COMPARE_TAIL;
+ Label COMPARE_WIDE_VECTORS, BREAK_LOOP;
// Compare 32-byte vectors
- andl(result, 0x0000001f); // tail count (in bytes)
- andl(len, 0xffffffe0); // vector count (in bytes)
- jccb(Assembler::zero, COMPARE_TAIL);
+ testl(len, 0xffffffe0); // vector count (in bytes)
+ jccb(Assembler::zero, TAIL_START);
+ andl(len, 0xffffffe0);
lea(ary1, Address(ary1, len, Address::times_1));
negptr(len);
@@ -3492,30 +3502,42 @@ void C2_MacroAssembler::has_negatives(Register ary1, Register len,
bind(COMPARE_WIDE_VECTORS);
vmovdqu(vec1, Address(ary1, len, Address::times_1));
vptest(vec1, vec2);
- jccb(Assembler::notZero, TRUE_LABEL);
+ jccb(Assembler::notZero, BREAK_LOOP);
addptr(len, 32);
- jcc(Assembler::notZero, COMPARE_WIDE_VECTORS);
+ jccb(Assembler::notZero, COMPARE_WIDE_VECTORS);
- testl(result, result);
- jccb(Assembler::zero, FALSE_LABEL);
+ testl(result, 0x0000001f); // any bytes remaining?
+ jcc(Assembler::zero, DONE);
- vmovdqu(vec1, Address(ary1, result, Address::times_1, -32));
+ // Quick test using the already prepared vector mask
+ movl(len, result);
+ andl(len, 0x0000001f);
+ vmovdqu(vec1, Address(ary1, len, Address::times_1, -32));
vptest(vec1, vec2);
- jccb(Assembler::notZero, TRUE_LABEL);
- jmpb(FALSE_LABEL);
+ jcc(Assembler::zero, DONE);
+ // There are negative bytes, jump to the tail to determine exactly where
+ jmpb(TAIL_START);
- bind(COMPARE_TAIL); // len is zero
- movl(len, result);
+ bind(BREAK_LOOP);
+ // At least one byte in the last 32-byte vector is negative.
+ // Set up to look at the last 32 bytes as if they were a tail
+ lea(ary1, Address(ary1, len, Address::times_1));
+ addptr(result, len);
+ // Ignore the very last byte: if all others are positive,
+ // it must be negative, so we can skip right to the 2+1 byte
+ // end comparison at this point
+ orl(result, 31);
+ movl(len, 31);
// Fallthru to tail compare
} else if (UseSSE42Intrinsics) {
// With SSE4.2, use double quad vector compare
- Label COMPARE_WIDE_VECTORS, COMPARE_TAIL;
+ Label COMPARE_WIDE_VECTORS, BREAK_LOOP;
// Compare 16-byte vectors
- andl(result, 0x0000000f); // tail count (in bytes)
- andl(len, 0xfffffff0); // vector count (in bytes)
- jcc(Assembler::zero, COMPARE_TAIL);
+ testl(len, 0xfffffff0); // vector count (in bytes)
+ jcc(Assembler::zero, TAIL_START);
+ andl(len, 0xfffffff0);
lea(ary1, Address(ary1, len, Address::times_1));
negptr(len);
@@ -3526,23 +3548,36 @@ void C2_MacroAssembler::has_negatives(Register ary1, Register len,
bind(COMPARE_WIDE_VECTORS);
movdqu(vec1, Address(ary1, len, Address::times_1));
ptest(vec1, vec2);
- jcc(Assembler::notZero, TRUE_LABEL);
+ jccb(Assembler::notZero, BREAK_LOOP);
addptr(len, 16);
- jcc(Assembler::notZero, COMPARE_WIDE_VECTORS);
+ jccb(Assembler::notZero, COMPARE_WIDE_VECTORS);
- testl(result, result);
- jcc(Assembler::zero, FALSE_LABEL);
+ testl(result, 0x0000000f); // len is zero, any bytes remaining?
+ jcc(Assembler::zero, DONE);
- movdqu(vec1, Address(ary1, result, Address::times_1, -16));
+ // Quick test using the already prepared vector mask
+ movl(len, result);
+ andl(len, 0x0000000f); // tail count (in bytes)
+ movdqu(vec1, Address(ary1, len, Address::times_1, -16));
ptest(vec1, vec2);
- jccb(Assembler::notZero, TRUE_LABEL);
- jmpb(FALSE_LABEL);
+ jcc(Assembler::zero, DONE);
+ jmpb(TAIL_START);
- bind(COMPARE_TAIL); // len is zero
- movl(len, result);
+ bind(BREAK_LOOP);
+ // At least one byte in the last 16-byte vector is negative.
+ // Set up and look at the last 16 bytes as if they were a tail
+ lea(ary1, Address(ary1, len, Address::times_1));
+ addptr(result, len);
+ // Ignore the very last byte: if all others are positive,
+ // it must be negative, so we can skip right to the 2+1 byte
+ // end comparison at this point
+ orl(result, 15);
+ movl(len, 15);
// Fallthru to tail compare
}
}
+
+ bind(TAIL_START);
// Compare 4-byte vectors
andl(len, 0xfffffffc); // vector count (in bytes)
jccb(Assembler::zero, COMPARE_CHAR);
@@ -3553,34 +3588,45 @@ void C2_MacroAssembler::has_negatives(Register ary1, Register len,
bind(COMPARE_VECTORS);
movl(tmp1, Address(ary1, len, Address::times_1));
andl(tmp1, 0x80808080);
- jccb(Assembler::notZero, TRUE_LABEL);
+ jccb(Assembler::notZero, TAIL_ADJUST);
addptr(len, 4);
- jcc(Assembler::notZero, COMPARE_VECTORS);
+ jccb(Assembler::notZero, COMPARE_VECTORS);
- // Compare trailing char (final 2 bytes), if any
+ // Compare trailing char (final 2-3 bytes), if any
bind(COMPARE_CHAR);
+
testl(result, 0x2); // tail char
jccb(Assembler::zero, COMPARE_BYTE);
load_unsigned_short(tmp1, Address(ary1, 0));
andl(tmp1, 0x00008080);
- jccb(Assembler::notZero, TRUE_LABEL);
- subptr(result, 2);
+ jccb(Assembler::notZero, CHAR_ADJUST);
lea(ary1, Address(ary1, 2));
bind(COMPARE_BYTE);
testl(result, 0x1); // tail byte
- jccb(Assembler::zero, FALSE_LABEL);
+ jccb(Assembler::zero, DONE);
load_unsigned_byte(tmp1, Address(ary1, 0));
- andl(tmp1, 0x00000080);
- jccb(Assembler::notEqual, TRUE_LABEL);
- jmpb(FALSE_LABEL);
-
- bind(TRUE_LABEL);
- movl(result, 1); // return true
+ testl(tmp1, 0x00000080);
+ jccb(Assembler::zero, DONE);
+ subptr(result, 1);
jmpb(DONE);
- bind(FALSE_LABEL);
- xorl(result, result); // return false
+ bind(TAIL_ADJUST);
+ // there are negative bits in the last 4 byte block.
+ // Adjust result and check the next three bytes
+ addptr(result, len);
+ orl(result, 3);
+ lea(ary1, Address(ary1, len, Address::times_1));
+ jmpb(COMPARE_CHAR);
+
+ bind(CHAR_ADJUST);
+ // We are looking at a char + optional byte tail, and found that one
+ // of the bytes in the char is negative. Adjust the result, check the
+ // first byte and readjust if needed.
+ andl(result, 0xfffffffc);
+ testl(tmp1, 0x00000080); // little-endian, so lowest byte comes first
+ jccb(Assembler::notZero, DONE);
+ addptr(result, 1);
// That's it
bind(DONE);
@@ -3590,6 +3636,7 @@ void C2_MacroAssembler::has_negatives(Register ary1, Register len,
vpxor(vec2, vec2);
}
}
+
// Compare char[] or byte[] arrays aligned to 4 bytes or substrings.
void C2_MacroAssembler::arrays_equals(bool is_array_equ, Register ary1, Register ary2,
Register limit, Register result, Register chr,
@@ -4321,6 +4368,94 @@ void C2_MacroAssembler::vector_maskall_operation(KRegister dst, Register src, in
}
}
+
+//
+// The following is a lookup-table-based popcount computation algorithm:
+// Index Bit set count
+// [ 0000 -> 0,
+// 0001 -> 1,
+// 0010 -> 1,
+// 0011 -> 2,
+// 0100 -> 1,
+// 0101 -> 2,
+// 0110 -> 2,
+// 0111 -> 3,
+// 1000 -> 1,
+// 1001 -> 2,
+// 1010 -> 2,
+// 1011 -> 3,
+// 1100 -> 2,
+// 1101 -> 3,
+// 1110 -> 3,
+// 1111 -> 4 ]
+// a. Count the number of 1s in the 4 LSB bits of each byte. These bits are used as
+// shuffle indices for lookup table access.
+// b. Right shift each byte of the vector lane by 4 positions.
+// c. Count the number of 1s in the 4 MSB bits of each byte. These bits are used as
+// shuffle indices for lookup table access.
+// d. Add the bitset counts of the upper and lower 4 bits of each byte.
+// e. Unpack double words to quad words and compute the sum of absolute differences of the
+// bitset counts of all the bytes of each quadword.
+// f. Perform step e. for the upper 128-bit vector lane.
+// g. Pack the bitset counts of the quadwords back to double words.
+// h. Unpacking and packing operations are not needed for 64-bit vector lanes.
+void C2_MacroAssembler::vector_popcount_int(XMMRegister dst, XMMRegister src, XMMRegister xtmp1,
+ XMMRegister xtmp2, XMMRegister xtmp3, Register rtmp,
+ int vec_enc) {
+ if (VM_Version::supports_avx512_vpopcntdq()) {
+ vpopcntd(dst, src, vec_enc);
+ } else {
+ assert((vec_enc == Assembler::AVX_512bit && VM_Version::supports_avx512bw()) || VM_Version::supports_avx2(), "");
+ movl(rtmp, 0x0F0F0F0F);
+ movdl(xtmp1, rtmp);
+ vpbroadcastd(xtmp1, xtmp1, vec_enc);
+ if (Assembler::AVX_512bit == vec_enc) {
+ evmovdqul(xtmp2, k0, ExternalAddress(StubRoutines::x86::vector_popcount_lut()), false, vec_enc, rtmp);
+ } else {
+ vmovdqu(xtmp2, ExternalAddress(StubRoutines::x86::vector_popcount_lut()), rtmp);
+ }
+ vpand(xtmp3, src, xtmp1, vec_enc);
+ vpshufb(xtmp3, xtmp2, xtmp3, vec_enc);
+ vpsrlw(dst, src, 4, vec_enc);
+ vpand(dst, dst, xtmp1, vec_enc);
+ vpshufb(dst, xtmp2, dst, vec_enc);
+ vpaddb(xtmp3, dst, xtmp3, vec_enc);
+ vpxor(xtmp1, xtmp1, xtmp1, vec_enc);
+ vpunpckhdq(dst, xtmp3, xtmp1, vec_enc);
+ vpsadbw(dst, dst, xtmp1, vec_enc);
+ vpunpckldq(xtmp2, xtmp3, xtmp1, vec_enc);
+ vpsadbw(xtmp2, xtmp2, xtmp1, vec_enc);
+ vpackuswb(dst, xtmp2, dst, vec_enc);
+ }
+}
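The lookup-table scheme in the comment above has a direct scalar analogue: split each byte into two nibbles, index a 16-entry popcount table with each, and add the two counts. A minimal C++ sketch of that idea (illustrative only, not HotSpot code):

```cpp
#include <cassert>
#include <cstdint>

// 16-entry table: popcount of each possible 4-bit value (the table in the
// comment above), indexed the same way vpshufb indexes its shuffle source.
static const uint8_t kNibblePopcount[16] = {
    0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4};

// Steps a-d: look up the low and high nibble separately and add the counts.
inline uint8_t byte_popcount(uint8_t b) {
    return kNibblePopcount[b & 0x0F] + kNibblePopcount[b >> 4];
}

// Step e in scalar form: a 32-bit popcount is the sum of its byte popcounts.
inline uint32_t int_popcount(uint32_t v) {
    uint32_t sum = 0;
    for (int i = 0; i < 4; i++) {
        sum += byte_popcount((v >> (8 * i)) & 0xFF);
    }
    return sum;
}
```

In the vector code, `vpshufb` performs sixteen or more of these table lookups at once, which is why the same LUT is broadcast across the whole vector register.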
+
+void C2_MacroAssembler::vector_popcount_long(XMMRegister dst, XMMRegister src, XMMRegister xtmp1,
+ XMMRegister xtmp2, XMMRegister xtmp3, Register rtmp,
+ int vec_enc) {
+ if (VM_Version::supports_avx512_vpopcntdq()) {
+ vpopcntq(dst, src, vec_enc);
+ } else if (vec_enc == Assembler::AVX_512bit) {
+ assert(VM_Version::supports_avx512bw(), "");
+ movl(rtmp, 0x0F0F0F0F);
+ movdl(xtmp1, rtmp);
+ vpbroadcastd(xtmp1, xtmp1, vec_enc);
+ evmovdqul(xtmp2, k0, ExternalAddress(StubRoutines::x86::vector_popcount_lut()), true, vec_enc, rtmp);
+ vpandq(xtmp3, src, xtmp1, vec_enc);
+ vpshufb(xtmp3, xtmp2, xtmp3, vec_enc);
+ vpsrlw(dst, src, 4, vec_enc);
+ vpandq(dst, dst, xtmp1, vec_enc);
+ vpshufb(dst, xtmp2, dst, vec_enc);
+ vpaddb(xtmp3, dst, xtmp3, vec_enc);
+ vpxorq(xtmp1, xtmp1, xtmp1, vec_enc);
+ vpsadbw(dst, xtmp3, xtmp1, vec_enc);
+ } else {
+    // We do not see any performance benefit in running the
+    // above instruction sequence on a 256-bit vector, which
+    // can operate on at most 4 long elements.
+ ShouldNotReachHere();
+ }
+ evpmovqd(dst, dst, vec_enc);
+}
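The reason the long variant needs no unpack/pack pair is that `vpsadbw` against a zero vector already sums the byte counts within each 8-byte lane. A scalar model of that reduction (names are illustrative):

```cpp
#include <cassert>
#include <cstdint>

// Scalar model of vpsadbw(dst, src, zero): the sum of absolute differences
// against zero is simply the sum of the eight byte values in the lane,
// landing the per-quadword popcount directly in a 64-bit element.
inline uint64_t sad_against_zero(uint64_t lane) {
    uint64_t sum = 0;
    for (int i = 0; i < 8; i++) {
        sum += (lane >> (8 * i)) & 0xFF;  // |byte - 0| == byte
    }
    return sum;
}
```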
+
#ifndef _LP64
void C2_MacroAssembler::vector_maskall_operation32(KRegister dst, Register src, KRegister tmp, int mask_len) {
assert(VM_Version::supports_avx512bw(), "");
diff --git a/src/hotspot/cpu/x86/c2_MacroAssembler_x86.hpp b/src/hotspot/cpu/x86/c2_MacroAssembler_x86.hpp
index 0e6a381430f2ae087e7a3a4eaa3c592b0a248fd5..5ecdf20700dfb11c584898481acdf48dbd2dfd49 100644
--- a/src/hotspot/cpu/x86/c2_MacroAssembler_x86.hpp
+++ b/src/hotspot/cpu/x86/c2_MacroAssembler_x86.hpp
@@ -271,11 +271,10 @@ public:
XMMRegister vec1, int ae, KRegister mask = knoreg);
// Search for Non-ASCII character (Negative byte value) in a byte array,
- // return true if it has any and false otherwise.
- void has_negatives(Register ary1, Register len,
- Register result, Register tmp1,
- XMMRegister vec1, XMMRegister vec2, KRegister mask1 = knoreg, KRegister mask2 = knoreg);
-
+ // return the index of the first such character, otherwise len.
+ void count_positives(Register ary1, Register len,
+ Register result, Register tmp1,
+ XMMRegister vec1, XMMRegister vec2, KRegister mask1 = knoreg, KRegister mask2 = knoreg);
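The renamed helper's contract — return the index of the first negative byte, or `len` when every byte is non-negative — can be written down as a scalar reference, which is handy when checking the assembly tail-adjustment paths above. This is a sketch, not HotSpot code:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Reference semantics of count_positives: the number of leading
// non-negative (ASCII) bytes, i.e. the index of the first byte with the
// sign bit set, or len if there is none.
inline size_t count_positives_ref(const int8_t* ary, size_t len) {
    for (size_t i = 0; i < len; i++) {
        if (ary[i] < 0) {
            return i;
        }
    }
    return len;
}
```

The old `has_negatives` boolean is recoverable as `count_positives_ref(ary, len) != len`, which is why the rename can preserve behavior for existing callers.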
// Compare char[] or byte[] arrays.
void arrays_equals(bool is_array_equ, Register ary1, Register ary2,
Register limit, Register result, Register chr,
@@ -317,4 +316,12 @@ public:
void evpternlog(XMMRegister dst, int func, KRegister mask, XMMRegister src2, Address src3,
bool merge, BasicType bt, int vlen_enc);
+ void vector_popcount_int(XMMRegister dst, XMMRegister src, XMMRegister xtmp1,
+ XMMRegister xtmp2, XMMRegister xtmp3, Register rtmp,
+ int vec_enc);
+
+ void vector_popcount_long(XMMRegister dst, XMMRegister src, XMMRegister xtmp1,
+ XMMRegister xtmp2, XMMRegister xtmp3, Register rtmp,
+ int vec_enc);
+
#endif // CPU_X86_C2_MACROASSEMBLER_X86_HPP
diff --git a/src/hotspot/cpu/x86/copy_x86.hpp b/src/hotspot/cpu/x86/copy_x86.hpp
index 74228b57f6c58949af2c1baa041cb899bcfb3fd5..1798e74eb0636f9b16b53beaee83397241f379d8 100644
--- a/src/hotspot/cpu/x86/copy_x86.hpp
+++ b/src/hotspot/cpu/x86/copy_x86.hpp
@@ -148,22 +148,7 @@ static void pd_disjoint_words(const HeapWord* from, HeapWord* to, size_t count)
static void pd_disjoint_words_atomic(const HeapWord* from, HeapWord* to, size_t count) {
#ifdef AMD64
- switch (count) {
- case 8: to[7] = from[7];
- case 7: to[6] = from[6];
- case 6: to[5] = from[5];
- case 5: to[4] = from[4];
- case 4: to[3] = from[3];
- case 3: to[2] = from[2];
- case 2: to[1] = from[1];
- case 1: to[0] = from[0];
- case 0: break;
- default:
- while (count-- > 0) {
- *to++ = *from++;
- }
- break;
- }
+ shared_disjoint_words_atomic(from, to, count);
#else
// pd_disjoint_words is word-atomic in this implementation.
pd_disjoint_words(from, to, count);
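The unrolled switch removed here is a fall-through ladder (each case copies one word, then falls into the next); the shared replacement does the same work one word per aligned store, so readers never observe a torn word. A simplified sketch with `uintptr_t` standing in for `HeapWord`, eliding the atomic qualifiers of the real helper:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Simplified word-by-word disjoint copy: each word is written with one
// aligned word-sized store, which is what makes the copy word-atomic on
// this platform. The source and destination ranges must not overlap.
inline void disjoint_words_copy(const uintptr_t* from, uintptr_t* to, size_t count) {
    while (count-- > 0) {
        *to++ = *from++;
    }
}
```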
diff --git a/src/hotspot/cpu/x86/frame_x86.inline.hpp b/src/hotspot/cpu/x86/frame_x86.inline.hpp
index 733a357d5fe3e7fe9a9e445b64b1ebd4b7c381ea..23072238e16aa3caf798e6f0690f3e5f4ca9ccaf 100644
--- a/src/hotspot/cpu/x86/frame_x86.inline.hpp
+++ b/src/hotspot/cpu/x86/frame_x86.inline.hpp
@@ -138,10 +138,13 @@ inline intptr_t* frame::id(void) const { return unextended_sp(); }
inline bool frame::is_older(intptr_t* id) const { assert(this->id() != NULL && id != NULL, "NULL frame id");
return this->id() > id ; }
-
-
inline intptr_t* frame::link() const { return (intptr_t*) *(intptr_t **)addr_at(link_offset); }
+inline intptr_t* frame::link_or_null() const {
+ intptr_t** ptr = (intptr_t **)addr_at(link_offset);
+ return os::is_readable_pointer(ptr) ? *ptr : NULL;
+}
+
inline intptr_t* frame::unextended_sp() const { return _unextended_sp; }
// Return address:
diff --git a/src/hotspot/cpu/x86/gc/g1/g1BarrierSetAssembler_x86.cpp b/src/hotspot/cpu/x86/gc/g1/g1BarrierSetAssembler_x86.cpp
index 6525b13c5c253e54b3b2a4bce288a506ed582304..475a92d0f43a5264b37765917bb07f7c764f8c1e 100644
--- a/src/hotspot/cpu/x86/gc/g1/g1BarrierSetAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/gc/g1/g1BarrierSetAssembler_x86.cpp
@@ -67,7 +67,7 @@ void G1BarrierSetAssembler::gen_write_ref_array_pre_barrier(MacroAssembler* masm
__ jcc(Assembler::equal, filtered);
- __ pusha(); // push registers
+ __ push_call_clobbered_registers(false /* save_fpu */);
#ifdef _LP64
if (count == c_rarg0) {
if (addr == c_rarg1) {
@@ -90,7 +90,7 @@ void G1BarrierSetAssembler::gen_write_ref_array_pre_barrier(MacroAssembler* masm
__ call_VM_leaf(CAST_FROM_FN_PTR(address, G1BarrierSetRuntime::write_ref_array_pre_oop_entry),
addr, count);
#endif
- __ popa();
+ __ pop_call_clobbered_registers(false /* save_fpu */);
__ bind(filtered);
}
@@ -98,7 +98,7 @@ void G1BarrierSetAssembler::gen_write_ref_array_pre_barrier(MacroAssembler* masm
void G1BarrierSetAssembler::gen_write_ref_array_post_barrier(MacroAssembler* masm, DecoratorSet decorators,
Register addr, Register count, Register tmp) {
- __ pusha(); // push registers (overkill)
+ __ push_call_clobbered_registers(false /* save_fpu */);
#ifdef _LP64
if (c_rarg0 == count) { // On win64 c_rarg0 == rcx
assert_different_registers(c_rarg1, addr);
@@ -114,7 +114,7 @@ void G1BarrierSetAssembler::gen_write_ref_array_post_barrier(MacroAssembler* mas
__ call_VM_leaf(CAST_FROM_FN_PTR(address, G1BarrierSetRuntime::write_ref_array_post_entry),
addr, count);
#endif
- __ popa();
+ __ pop_call_clobbered_registers(false /* save_fpu */);
}
void G1BarrierSetAssembler::load_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
@@ -204,14 +204,15 @@ void G1BarrierSetAssembler::g1_write_barrier_pre(MacroAssembler* masm,
__ jmp(done);
__ bind(runtime);
- // save the live input values
- if(tosca_live) __ push(rax);
- if (obj != noreg && obj != rax)
- __ push(obj);
+ // Determine and save the live input values
+ RegSet saved;
+ if (tosca_live) saved += RegSet::of(rax);
+ if (obj != noreg && obj != rax) saved += RegSet::of(obj);
+ if (pre_val != rax) saved += RegSet::of(pre_val);
+ NOT_LP64( saved += RegSet::of(thread); )
- if (pre_val != rax)
- __ push(pre_val);
+ __ push_set(saved);
// Calling the runtime using the regular call_VM_leaf mechanism generates
// code (generated by InterpreterMacroAssember::call_VM_leaf_base)
@@ -225,8 +226,6 @@ void G1BarrierSetAssembler::g1_write_barrier_pre(MacroAssembler* masm,
// So when we do not have have a full interpreter frame on the stack
// expand_call should be passed true.
- NOT_LP64( __ push(thread); )
-
if (expand_call) {
LP64_ONLY( assert(pre_val != c_rarg1, "smashed arg"); )
#ifdef _LP64
@@ -244,17 +243,7 @@ void G1BarrierSetAssembler::g1_write_barrier_pre(MacroAssembler* masm,
} else {
__ call_VM_leaf(CAST_FROM_FN_PTR(address, G1BarrierSetRuntime::write_ref_field_pre_entry), pre_val, thread);
}
-
- NOT_LP64( __ pop(thread); )
-
- // save the live input values
- if (pre_val != rax)
- __ pop(pre_val);
-
- if (obj != noreg && obj != rax)
- __ pop(obj);
-
- if(tosca_live) __ pop(rax);
+ __ pop_set(saved);
__ bind(done);
}
@@ -328,21 +317,16 @@ void G1BarrierSetAssembler::g1_write_barrier_post(MacroAssembler* masm,
__ bind(runtime);
// save the live input values
- __ push(store_addr);
-#ifdef _LP64
- __ call_VM_leaf(CAST_FROM_FN_PTR(address, G1BarrierSetRuntime::write_ref_field_post_entry), card_addr, r15_thread);
-#else
- __ push(thread);
+ RegSet saved = RegSet::of(store_addr NOT_LP64(COMMA thread));
+ __ push_set(saved);
__ call_VM_leaf(CAST_FROM_FN_PTR(address, G1BarrierSetRuntime::write_ref_field_post_entry), card_addr, thread);
- __ pop(thread);
-#endif
- __ pop(store_addr);
+ __ pop_set(saved);
__ bind(done);
}
void G1BarrierSetAssembler::oop_store_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
- Address dst, Register val, Register tmp1, Register tmp2) {
+ Address dst, Register val, Register tmp1, Register tmp2, Register tmp3) {
bool in_heap = (decorators & IN_HEAP) != 0;
bool as_normal = (decorators & AS_NORMAL) != 0;
assert((decorators & IS_DEST_UNINITIALIZED) == 0, "unsupported");
@@ -350,7 +334,6 @@ void G1BarrierSetAssembler::oop_store_at(MacroAssembler* masm, DecoratorSet deco
bool needs_pre_barrier = as_normal;
bool needs_post_barrier = val != noreg && in_heap;
- Register tmp3 = LP64_ONLY(r8) NOT_LP64(rsi);
Register rthread = LP64_ONLY(r15_thread) NOT_LP64(rcx);
// flatten object address if needed
// We do it regardless of precise because we need the registers
@@ -379,7 +362,7 @@ void G1BarrierSetAssembler::oop_store_at(MacroAssembler* masm, DecoratorSet deco
false /* expand_call */);
}
if (val == noreg) {
- BarrierSetAssembler::store_at(masm, decorators, type, Address(tmp1, 0), val, noreg, noreg);
+ BarrierSetAssembler::store_at(masm, decorators, type, Address(tmp1, 0), val, noreg, noreg, noreg);
} else {
Register new_val = val;
if (needs_post_barrier) {
@@ -389,7 +372,7 @@ void G1BarrierSetAssembler::oop_store_at(MacroAssembler* masm, DecoratorSet deco
__ movptr(new_val, val);
}
}
- BarrierSetAssembler::store_at(masm, decorators, type, Address(tmp1, 0), val, noreg, noreg);
+ BarrierSetAssembler::store_at(masm, decorators, type, Address(tmp1, 0), val, noreg, noreg, noreg);
if (needs_post_barrier) {
g1_write_barrier_post(masm /*masm*/,
tmp1 /* store_adr */,
@@ -496,13 +479,13 @@ void G1BarrierSetAssembler::generate_c1_pre_barrier_runtime_stub(StubAssembler*
__ bind(runtime);
- __ save_live_registers_no_oop_map(true);
+ __ push_call_clobbered_registers();
// load the pre-value
__ load_parameter(0, rcx);
__ call_VM_leaf(CAST_FROM_FN_PTR(address, G1BarrierSetRuntime::write_ref_field_pre_entry), rcx, thread);
- __ restore_live_registers(true);
+ __ pop_call_clobbered_registers();
__ bind(done);
@@ -515,9 +498,6 @@ void G1BarrierSetAssembler::generate_c1_pre_barrier_runtime_stub(StubAssembler*
void G1BarrierSetAssembler::generate_c1_post_barrier_runtime_stub(StubAssembler* sasm) {
__ prologue("g1_post_barrier", false);
- // arg0: store_address
- Address store_addr(rbp, 2*BytesPerWord);
-
CardTableBarrierSet* ct =
barrier_set_cast(BarrierSet::barrier_set());
@@ -573,12 +553,11 @@ void G1BarrierSetAssembler::generate_c1_post_barrier_runtime_stub(StubAssembler*
__ jmp(enqueued);
__ bind(runtime);
-
- __ save_live_registers_no_oop_map(true);
+ __ push_call_clobbered_registers();
__ call_VM_leaf(CAST_FROM_FN_PTR(address, G1BarrierSetRuntime::write_ref_field_post_entry), card_addr, thread);
- __ restore_live_registers(true);
+ __ pop_call_clobbered_registers();
__ bind(enqueued);
__ pop(rdx);
diff --git a/src/hotspot/cpu/x86/gc/g1/g1BarrierSetAssembler_x86.hpp b/src/hotspot/cpu/x86/gc/g1/g1BarrierSetAssembler_x86.hpp
index 94bbadc7b2b14622e7a168a6641d6615200bfe2f..a5695f5657a4ad6a10ed8fc1687959f6b55f2ecb 100644
--- a/src/hotspot/cpu/x86/gc/g1/g1BarrierSetAssembler_x86.hpp
+++ b/src/hotspot/cpu/x86/gc/g1/g1BarrierSetAssembler_x86.hpp
@@ -54,7 +54,7 @@ class G1BarrierSetAssembler: public ModRefBarrierSetAssembler {
Register tmp2);
virtual void oop_store_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
- Address dst, Register val, Register tmp1, Register tmp2);
+ Address dst, Register val, Register tmp1, Register tmp2, Register tmp3);
public:
void gen_pre_barrier_stub(LIR_Assembler* ce, G1PreBarrierStub* stub);
diff --git a/src/hotspot/cpu/x86/gc/shared/barrierSetAssembler_x86.cpp b/src/hotspot/cpu/x86/gc/shared/barrierSetAssembler_x86.cpp
index 55823bdf217c33b059b7066d38e310fd61056d2d..930926bbb17652308db427ae09242dba1db94451 100644
--- a/src/hotspot/cpu/x86/gc/shared/barrierSetAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/gc/shared/barrierSetAssembler_x86.cpp
@@ -103,7 +103,7 @@ void BarrierSetAssembler::load_at(MacroAssembler* masm, DecoratorSet decorators,
}
void BarrierSetAssembler::store_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
- Address dst, Register val, Register tmp1, Register tmp2) {
+ Address dst, Register val, Register tmp1, Register tmp2, Register tmp3) {
bool in_heap = (decorators & IN_HEAP) != 0;
bool in_native = (decorators & IN_NATIVE) != 0;
bool is_not_null = (decorators & IS_NOT_NULL) != 0;
diff --git a/src/hotspot/cpu/x86/gc/shared/barrierSetAssembler_x86.hpp b/src/hotspot/cpu/x86/gc/shared/barrierSetAssembler_x86.hpp
index 3c63c00e4dbcb8b4fe1fa1c5e34b84684d9691e7..085238d60b55f2caa4dde806b5409cd5864d8a35 100644
--- a/src/hotspot/cpu/x86/gc/shared/barrierSetAssembler_x86.hpp
+++ b/src/hotspot/cpu/x86/gc/shared/barrierSetAssembler_x86.hpp
@@ -47,7 +47,7 @@ public:
virtual void load_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
Register dst, Address src, Register tmp1, Register tmp_thread);
virtual void store_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
- Address dst, Register val, Register tmp1, Register tmp2);
+ Address dst, Register val, Register tmp1, Register tmp2, Register tmp3);
// Support for jniFastGetField to try resolving a jobject/jweak in native
virtual void try_resolve_jobject_in_native(MacroAssembler* masm, Register jni_env,
diff --git a/src/hotspot/cpu/x86/gc/shared/cardTableBarrierSetAssembler_x86.cpp b/src/hotspot/cpu/x86/gc/shared/cardTableBarrierSetAssembler_x86.cpp
index 7fc36ffae8f0ba32a025bbf2cf81aa71eb85c378..f314cac5980b7f3c9e44ad888383a1139ab1c58d 100644
--- a/src/hotspot/cpu/x86/gc/shared/cardTableBarrierSetAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/gc/shared/cardTableBarrierSetAssembler_x86.cpp
@@ -128,7 +128,7 @@ void CardTableBarrierSetAssembler::store_check(MacroAssembler* masm, Register ob
}
void CardTableBarrierSetAssembler::oop_store_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
- Address dst, Register val, Register tmp1, Register tmp2) {
+ Address dst, Register val, Register tmp1, Register tmp2, Register tmp3) {
bool in_heap = (decorators & IN_HEAP) != 0;
bool is_array = (decorators & IS_ARRAY) != 0;
@@ -137,7 +137,7 @@ void CardTableBarrierSetAssembler::oop_store_at(MacroAssembler* masm, DecoratorS
bool needs_post_barrier = val != noreg && in_heap;
- BarrierSetAssembler::store_at(masm, decorators, type, dst, val, noreg, noreg);
+ BarrierSetAssembler::store_at(masm, decorators, type, dst, val, noreg, noreg, noreg);
if (needs_post_barrier) {
// flatten object address if needed
if (!precise || (dst.index() == noreg && dst.disp() == 0)) {
diff --git a/src/hotspot/cpu/x86/gc/shared/cardTableBarrierSetAssembler_x86.hpp b/src/hotspot/cpu/x86/gc/shared/cardTableBarrierSetAssembler_x86.hpp
index a65286bd5996734f49e47f0f2137f27676d3c2f6..4760b222977a81b9e8febd93701786409860835d 100644
--- a/src/hotspot/cpu/x86/gc/shared/cardTableBarrierSetAssembler_x86.hpp
+++ b/src/hotspot/cpu/x86/gc/shared/cardTableBarrierSetAssembler_x86.hpp
@@ -35,7 +35,7 @@ protected:
virtual void gen_write_ref_array_post_barrier(MacroAssembler* masm, DecoratorSet decorators, Register addr, Register count, Register tmp);
virtual void oop_store_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
- Address dst, Register val, Register tmp1, Register tmp2);
+ Address dst, Register val, Register tmp1, Register tmp2, Register tmp3);
};
#endif // CPU_X86_GC_SHARED_CARDTABLEBARRIERSETASSEMBLER_X86_HPP
diff --git a/src/hotspot/cpu/x86/gc/shared/modRefBarrierSetAssembler_x86.cpp b/src/hotspot/cpu/x86/gc/shared/modRefBarrierSetAssembler_x86.cpp
index 9325ab7ecf9c711605f1fe75d637782d2fecdcca..618095bdfa634b8c3a7cdccab5e280ae96668c1b 100644
--- a/src/hotspot/cpu/x86/gc/shared/modRefBarrierSetAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/gc/shared/modRefBarrierSetAssembler_x86.cpp
@@ -84,10 +84,10 @@ void ModRefBarrierSetAssembler::arraycopy_epilogue(MacroAssembler* masm, Decorat
}
void ModRefBarrierSetAssembler::store_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
- Address dst, Register val, Register tmp1, Register tmp2) {
+ Address dst, Register val, Register tmp1, Register tmp2, Register tmp3) {
if (is_reference_type(type)) {
- oop_store_at(masm, decorators, type, dst, val, tmp1, tmp2);
+ oop_store_at(masm, decorators, type, dst, val, tmp1, tmp2, tmp3);
} else {
- BarrierSetAssembler::store_at(masm, decorators, type, dst, val, tmp1, tmp2);
+ BarrierSetAssembler::store_at(masm, decorators, type, dst, val, tmp1, tmp2, tmp3);
}
}
diff --git a/src/hotspot/cpu/x86/gc/shared/modRefBarrierSetAssembler_x86.hpp b/src/hotspot/cpu/x86/gc/shared/modRefBarrierSetAssembler_x86.hpp
index 39950225bfe736a71f7b6dc34a021e78357b110e..c8b5043256ad203bed7d16f7ead5e188c91cbb45 100644
--- a/src/hotspot/cpu/x86/gc/shared/modRefBarrierSetAssembler_x86.hpp
+++ b/src/hotspot/cpu/x86/gc/shared/modRefBarrierSetAssembler_x86.hpp
@@ -39,7 +39,7 @@ protected:
virtual void gen_write_ref_array_post_barrier(MacroAssembler* masm, DecoratorSet decorators,
Register addr, Register count, Register tmp) {}
virtual void oop_store_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
- Address dst, Register val, Register tmp1, Register tmp2) = 0;
+ Address dst, Register val, Register tmp1, Register tmp2, Register tmp3) = 0;
public:
virtual void arraycopy_prologue(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
Register src, Register dst, Register count);
@@ -47,7 +47,7 @@ public:
Register src, Register dst, Register count);
virtual void store_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
- Address dst, Register val, Register tmp1, Register tmp2);
+ Address dst, Register val, Register tmp1, Register tmp2, Register tmp3);
};
#endif // CPU_X86_GC_SHARED_MODREFBARRIERSETASSEMBLER_X86_HPP
diff --git a/src/hotspot/cpu/x86/gc/shenandoah/shenandoahBarrierSetAssembler_x86.cpp b/src/hotspot/cpu/x86/gc/shenandoah/shenandoahBarrierSetAssembler_x86.cpp
index 64169b015293084fc4a9ec692c5d88977d38af6a..d213e6fda394e6796bddf81a2a22aef405026b66 100644
--- a/src/hotspot/cpu/x86/gc/shenandoah/shenandoahBarrierSetAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/gc/shenandoah/shenandoahBarrierSetAssembler_x86.cpp
@@ -591,7 +591,7 @@ void ShenandoahBarrierSetAssembler::load_at(MacroAssembler* masm, DecoratorSet d
}
void ShenandoahBarrierSetAssembler::store_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
- Address dst, Register val, Register tmp1, Register tmp2) {
+ Address dst, Register val, Register tmp1, Register tmp2, Register tmp3) {
bool on_oop = is_reference_type(type);
bool in_heap = (decorators & IN_HEAP) != 0;
@@ -599,7 +599,6 @@ void ShenandoahBarrierSetAssembler::store_at(MacroAssembler* masm, DecoratorSet
if (on_oop && in_heap) {
bool needs_pre_barrier = as_normal;
- Register tmp3 = LP64_ONLY(r8) NOT_LP64(rsi);
Register rthread = LP64_ONLY(r15_thread) NOT_LP64(rcx);
// flatten object address if needed
// We do it regardless of precise because we need the registers
@@ -629,14 +628,14 @@ void ShenandoahBarrierSetAssembler::store_at(MacroAssembler* masm, DecoratorSet
false /* expand_call */);
}
if (val == noreg) {
- BarrierSetAssembler::store_at(masm, decorators, type, Address(tmp1, 0), val, noreg, noreg);
+ BarrierSetAssembler::store_at(masm, decorators, type, Address(tmp1, 0), val, noreg, noreg, noreg);
} else {
iu_barrier(masm, val, tmp3);
- BarrierSetAssembler::store_at(masm, decorators, type, Address(tmp1, 0), val, noreg, noreg);
+ BarrierSetAssembler::store_at(masm, decorators, type, Address(tmp1, 0), val, noreg, noreg, noreg);
}
NOT_LP64(imasm->restore_bcp());
} else {
- BarrierSetAssembler::store_at(masm, decorators, type, dst, val, tmp1, tmp2);
+ BarrierSetAssembler::store_at(masm, decorators, type, dst, val, tmp1, tmp2, tmp3);
}
}
diff --git a/src/hotspot/cpu/x86/gc/shenandoah/shenandoahBarrierSetAssembler_x86.hpp b/src/hotspot/cpu/x86/gc/shenandoah/shenandoahBarrierSetAssembler_x86.hpp
index 2a8c0862b9e6380c85c18570693e8e3d483fc655..47dfe1449280259f524318b1adf4fc4b573787c8 100644
--- a/src/hotspot/cpu/x86/gc/shenandoah/shenandoahBarrierSetAssembler_x86.hpp
+++ b/src/hotspot/cpu/x86/gc/shenandoah/shenandoahBarrierSetAssembler_x86.hpp
@@ -77,7 +77,7 @@ public:
virtual void load_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
Register dst, Address src, Register tmp1, Register tmp_thread);
virtual void store_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
- Address dst, Register val, Register tmp1, Register tmp2);
+ Address dst, Register val, Register tmp1, Register tmp2, Register tmp3);
virtual void try_resolve_jobject_in_native(MacroAssembler* masm, Register jni_env,
Register obj, Register tmp, Label& slowpath);
};
diff --git a/src/hotspot/cpu/x86/gc/z/zBarrierSetAssembler_x86.cpp b/src/hotspot/cpu/x86/gc/z/zBarrierSetAssembler_x86.cpp
index 3ffd3a2a85f4062164a413b982774574c81a46e5..00071d66da34166365cd8e10f56d832988295377 100644
--- a/src/hotspot/cpu/x86/gc/z/zBarrierSetAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/gc/z/zBarrierSetAssembler_x86.cpp
@@ -193,7 +193,8 @@ void ZBarrierSetAssembler::store_at(MacroAssembler* masm,
Address dst,
Register src,
Register tmp1,
- Register tmp2) {
+ Register tmp2,
+ Register tmp3) {
BLOCK_COMMENT("ZBarrierSetAssembler::store_at {");
// Verify oop store
@@ -211,7 +212,7 @@ void ZBarrierSetAssembler::store_at(MacroAssembler* masm,
}
// Store value
- BarrierSetAssembler::store_at(masm, decorators, type, dst, src, tmp1, tmp2);
+ BarrierSetAssembler::store_at(masm, decorators, type, dst, src, tmp1, tmp2, tmp3);
BLOCK_COMMENT("} ZBarrierSetAssembler::store_at");
}
@@ -452,7 +453,7 @@ private:
void opmask_register_save(KRegister reg) {
_spill_offset -= 8;
- __ kmovql(Address(rsp, _spill_offset), reg);
+ __ kmov(Address(rsp, _spill_offset), reg);
}
void gp_register_restore(Register reg) {
@@ -461,7 +462,7 @@ private:
}
void opmask_register_restore(KRegister reg) {
- __ kmovql(reg, Address(rsp, _spill_offset));
+ __ kmov(reg, Address(rsp, _spill_offset));
_spill_offset += 8;
}
diff --git a/src/hotspot/cpu/x86/gc/z/zBarrierSetAssembler_x86.hpp b/src/hotspot/cpu/x86/gc/z/zBarrierSetAssembler_x86.hpp
index 134f7e6c9e2e5951a63a8a816c3d1ddd311332df..2446bd1e46a73357d47f9fda7a95828d9a13df8a 100644
--- a/src/hotspot/cpu/x86/gc/z/zBarrierSetAssembler_x86.hpp
+++ b/src/hotspot/cpu/x86/gc/z/zBarrierSetAssembler_x86.hpp
@@ -61,7 +61,8 @@ public:
Address dst,
Register src,
Register tmp1,
- Register tmp2);
+ Register tmp2,
+ Register tmp3);
#endif // ASSERT
virtual void arraycopy_prologue(MacroAssembler* masm,
diff --git a/src/hotspot/cpu/x86/interp_masm_x86.cpp b/src/hotspot/cpu/x86/interp_masm_x86.cpp
index bf8b94a6319dbacfe7a14d0228fb3fb50689a3f8..34d4178b8da4c39e4412428ddb1cb222098532ce 100644
--- a/src/hotspot/cpu/x86/interp_masm_x86.cpp
+++ b/src/hotspot/cpu/x86/interp_masm_x86.cpp
@@ -1972,19 +1972,18 @@ void InterpreterMacroAssembler::verify_FPU(int stack_depth, TosState state) {
#endif
}
-// Jump if ((*counter_addr += increment) & mask) satisfies the condition.
-void InterpreterMacroAssembler::increment_mask_and_jump(Address counter_addr,
- int increment, Address mask,
- Register scratch, bool preloaded,
- Condition cond, Label* where) {
- if (!preloaded) {
- movl(scratch, counter_addr);
- }
- incrementl(scratch, increment);
+// Jump if ((*counter_addr += increment) & mask) == 0
+void InterpreterMacroAssembler::increment_mask_and_jump(Address counter_addr, Address mask,
+ Register scratch, Label* where) {
+ // This update is actually not atomic and can lose a number of updates
+ // under heavy contention, but the alternative of using the (contended)
+ // atomic update here penalizes profiling paths too much.
+ movl(scratch, counter_addr);
+ incrementl(scratch, InvocationCounter::count_increment);
movl(counter_addr, scratch);
andl(scratch, mask);
if (where != NULL) {
- jcc(cond, *where);
+ jcc(Assembler::zero, *where);
}
}
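The new comment is worth unpacking: the counter update is a plain load/add/store, so concurrent updates can be lost, but the mask test still fires roughly once per mask-period of increments, which is all the profiling heuristics need. A scalar model of the sequence (hypothetical names, not the HotSpot implementation):

```cpp
#include <cassert>
#include <cstdint>

// Model of increment_mask_and_jump: bump the counter non-atomically and
// report whether (counter & mask) == 0, i.e. whether the caller should
// take the overflow/profiling branch. Lost updates under contention are
// tolerated by design.
inline bool increment_and_check(uint32_t* counter, uint32_t increment, uint32_t mask) {
    uint32_t v = *counter;   // movl(scratch, counter_addr)
    v += increment;          // incrementl(scratch, increment)
    *counter = v;            // movl(counter_addr, scratch)
    return (v & mask) == 0;  // andl(scratch, mask); jcc(Assembler::zero, *where)
}
```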
diff --git a/src/hotspot/cpu/x86/interp_masm_x86.hpp b/src/hotspot/cpu/x86/interp_masm_x86.hpp
index 0aecb6b4a25e6bf47d3876060c7940e7bb276003..a94f35426b8bcaa186f7e3b54c8c8314fc4e59d5 100644
--- a/src/hotspot/cpu/x86/interp_masm_x86.hpp
+++ b/src/hotspot/cpu/x86/interp_masm_x86.hpp
@@ -248,10 +248,8 @@ class InterpreterMacroAssembler: public MacroAssembler {
bool decrement = false);
void increment_mdp_data_at(Register mdp_in, Register reg, int constant,
bool decrement = false);
- void increment_mask_and_jump(Address counter_addr,
- int increment, Address mask,
- Register scratch, bool preloaded,
- Condition cond, Label* where);
+ void increment_mask_and_jump(Address counter_addr, Address mask,
+ Register scratch, Label* where);
void set_mdp_flag_at(Register mdp_in, int flag_constant);
void test_mdp_data_at(Register mdp_in, int offset, Register value,
Register test_value_out,
diff --git a/src/hotspot/cpu/x86/macroAssembler_x86.cpp b/src/hotspot/cpu/x86/macroAssembler_x86.cpp
index 10a1cb4b6a1a0c094566558413e86fbdc0a9beff..e9285b11e42b6facfbb87cb7588691c3e13d7146 100644
--- a/src/hotspot/cpu/x86/macroAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/macroAssembler_x86.cpp
@@ -26,6 +26,7 @@
#include "jvm.h"
#include "asm/assembler.hpp"
#include "asm/assembler.inline.hpp"
+#include "c1/c1_FrameMap.hpp"
#include "compiler/compiler_globals.hpp"
#include "compiler/disassembler.hpp"
#include "gc/shared/barrierSet.hpp"
@@ -332,21 +333,6 @@ void MacroAssembler::movptr(Address dst, intptr_t src) {
movl(dst, src);
}
-
-void MacroAssembler::pop_callee_saved_registers() {
- pop(rcx);
- pop(rdx);
- pop(rdi);
- pop(rsi);
-}
-
-void MacroAssembler::push_callee_saved_registers() {
- push(rsi);
- push(rdi);
- push(rdx);
- push(rcx);
-}
-
void MacroAssembler::pushoop(jobject obj) {
push_literal32((int32_t)obj, oop_Relocation::spec_for_immediate());
}
@@ -3593,6 +3579,190 @@ void MacroAssembler::tlab_allocate(Register thread, Register obj,
bs->tlab_allocate(this, thread, obj, var_size_in_bytes, con_size_in_bytes, t1, t2, slow_case);
}
+RegSet MacroAssembler::call_clobbered_gp_registers() {
+ RegSet regs;
+#ifdef _LP64
+ regs += RegSet::of(rax, rcx, rdx);
+#ifndef _WINDOWS
+ regs += RegSet::of(rsi, rdi);
+#endif
+ regs += RegSet::range(r8, r11);
+#else
+ regs += RegSet::of(rax, rcx, rdx);
+#endif
+ return regs;
+}
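`RegSet` is essentially a value type over a bitmask of register encodings, which is what makes expressions like `RegSet::range(r8, r11)` and `set - exclude` cheap. A minimal sketch of such a type (illustrative; HotSpot's real `RegSet` lives in the assembler headers and carries more machinery):

```cpp
#include <cassert>
#include <cstdint>

// Register set as a bitmask: register encoding i <-> bit i.
class RegSetSketch {
    uint32_t _bits;
    explicit RegSetSketch(uint32_t bits) : _bits(bits) {}
public:
    RegSetSketch() : _bits(0) {}
    static RegSetSketch of(int r) { return RegSetSketch(1u << r); }
    static RegSetSketch range(int lo, int hi) {  // inclusive, like r8..r11
        return RegSetSketch(((1u << (hi - lo + 1)) - 1u) << lo);
    }
    RegSetSketch& operator+=(RegSetSketch o) { _bits |= o._bits; return *this; }
    RegSetSketch operator-(RegSetSketch o) const { return RegSetSketch(_bits & ~o._bits); }
    int size() const {
        int n = 0;
        for (uint32_t b = _bits; b != 0; b &= b - 1) n++;  // clear lowest set bit
        return n;
    }
};
```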
+
+XMMRegSet MacroAssembler::call_clobbered_xmm_registers() {
+#if defined(_WINDOWS) && defined(_LP64)
+ XMMRegSet result = XMMRegSet::range(xmm0, xmm5);
+ if (FrameMap::get_num_caller_save_xmms() > 16) {
+ result += XMMRegSet::range(xmm16, as_XMMRegister(FrameMap::get_num_caller_save_xmms() - 1));
+ }
+ return result;
+#else
+ return XMMRegSet::range(xmm0, as_XMMRegister(FrameMap::get_num_caller_save_xmms() - 1));
+#endif
+}
+
+static int FPUSaveAreaSize = align_up(108, StackAlignmentInBytes); // 108 bytes needed for FPU state by fsave/frstor
+
+#ifndef _LP64
+static bool use_x87_registers() { return UseSSE < 2; }
+#endif
+static bool use_xmm_registers() { return UseSSE >= 1; }
+
+// C1 only ever uses the first double/float of the XMM register.
+static int xmm_save_size() { return UseSSE >= 2 ? sizeof(double) : sizeof(float); }
+
+static void save_xmm_register(MacroAssembler* masm, int offset, XMMRegister reg) {
+ if (UseSSE == 1) {
+ masm->movflt(Address(rsp, offset), reg);
+ } else {
+ masm->movdbl(Address(rsp, offset), reg);
+ }
+}
+
+static void restore_xmm_register(MacroAssembler* masm, int offset, XMMRegister reg) {
+ if (UseSSE == 1) {
+ masm->movflt(reg, Address(rsp, offset));
+ } else {
+ masm->movdbl(reg, Address(rsp, offset));
+ }
+}
+
+static int register_section_sizes(RegSet gp_registers, XMMRegSet xmm_registers, bool save_fpu,
+ int& gp_area_size, int& fp_area_size, int& xmm_area_size) {
+
+ gp_area_size = align_up(gp_registers.size() * RegisterImpl::max_slots_per_register * VMRegImpl::stack_slot_size,
+ StackAlignmentInBytes);
+#ifdef _LP64
+ fp_area_size = 0;
+#else
+ fp_area_size = (save_fpu && use_x87_registers()) ? FPUSaveAreaSize : 0;
+#endif
+ xmm_area_size = (save_fpu && use_xmm_registers()) ? xmm_registers.size() * xmm_save_size() : 0;
+
+ return gp_area_size + fp_area_size + xmm_area_size;
+}
+
+void MacroAssembler::push_call_clobbered_registers_except(RegSet exclude, bool save_fpu) {
+ block_comment("push_call_clobbered_registers start");
+ // Regular registers
+ RegSet gp_registers_to_push = call_clobbered_gp_registers() - exclude;
+
+ int gp_area_size;
+ int fp_area_size;
+ int xmm_area_size;
+ int total_save_size = register_section_sizes(gp_registers_to_push, call_clobbered_xmm_registers(), save_fpu,
+ gp_area_size, fp_area_size, xmm_area_size);
+ subptr(rsp, total_save_size);
+
+ push_set(gp_registers_to_push, 0);
+
+#ifndef _LP64
+ if (save_fpu && use_x87_registers()) {
+ fnsave(Address(rsp, gp_area_size));
+ fwait();
+ }
+#endif
+ if (save_fpu && use_xmm_registers()) {
+ push_set(call_clobbered_xmm_registers(), gp_area_size + fp_area_size);
+ }
+
+ block_comment("push_call_clobbered_registers end");
+}
+
+void MacroAssembler::pop_call_clobbered_registers_except(RegSet exclude, bool restore_fpu) {
+ block_comment("pop_call_clobbered_registers start");
+
+ RegSet gp_registers_to_pop = call_clobbered_gp_registers() - exclude;
+
+ int gp_area_size;
+ int fp_area_size;
+ int xmm_area_size;
+ int total_save_size = register_section_sizes(gp_registers_to_pop, call_clobbered_xmm_registers(), restore_fpu,
+ gp_area_size, fp_area_size, xmm_area_size);
+
+ if (restore_fpu && use_xmm_registers()) {
+ pop_set(call_clobbered_xmm_registers(), gp_area_size + fp_area_size);
+ }
+#ifndef _LP64
+ if (restore_fpu && use_x87_registers()) {
+ frstor(Address(rsp, gp_area_size));
+ }
+#endif
+
+ pop_set(gp_registers_to_pop, 0);
+
+ addptr(rsp, total_save_size);
+
+ vzeroupper();
+
+ block_comment("pop_call_clobbered_registers end");
+}
+
+void MacroAssembler::push_set(XMMRegSet set, int offset) {
+ assert(is_aligned(set.size() * xmm_save_size(), StackAlignmentInBytes), "must be");
+ int spill_offset = offset;
+
+ for (RegSetIterator<XMMRegister> it = set.begin(); *it != xnoreg; ++it) {
+ save_xmm_register(this, spill_offset, *it);
+ spill_offset += xmm_save_size();
+ }
+}
+
+void MacroAssembler::pop_set(XMMRegSet set, int offset) {
+ int restore_size = set.size() * xmm_save_size();
+ assert(is_aligned(restore_size, StackAlignmentInBytes), "must be");
+
+ int restore_offset = offset + restore_size - xmm_save_size();
+
+ for (ReverseRegSetIterator<XMMRegister> it = set.rbegin(); *it != xnoreg; ++it) {
+ restore_xmm_register(this, restore_offset, *it);
+ restore_offset -= xmm_save_size();
+ }
+}
+
+void MacroAssembler::push_set(RegSet set, int offset) {
+ int spill_offset;
+ if (offset == -1) {
+ int register_push_size = set.size() * RegisterImpl::max_slots_per_register * VMRegImpl::stack_slot_size;
+ int aligned_size = align_up(register_push_size, StackAlignmentInBytes);
+ subptr(rsp, aligned_size);
+ spill_offset = 0;
+ } else {
+ spill_offset = offset;
+ }
+
+ for (RegSetIterator<Register> it = set.begin(); *it != noreg; ++it) {
+ movptr(Address(rsp, spill_offset), *it);
+ spill_offset += RegisterImpl::max_slots_per_register * VMRegImpl::stack_slot_size;
+ }
+}
+
+void MacroAssembler::pop_set(RegSet set, int offset) {
+
+ int gp_reg_size = RegisterImpl::max_slots_per_register * VMRegImpl::stack_slot_size;
+ int restore_size = set.size() * gp_reg_size;
+ int aligned_size = align_up(restore_size, StackAlignmentInBytes);
+
+ int restore_offset;
+ if (offset == -1) {
+ restore_offset = restore_size - gp_reg_size;
+ } else {
+ restore_offset = offset + restore_size - gp_reg_size;
+ }
+ for (ReverseRegSetIterator<Register> it = set.rbegin(); *it != noreg; ++it) {
+ movptr(*it, Address(rsp, restore_offset));
+ restore_offset -= gp_reg_size;
+ }
+
+ if (offset == -1) {
+ addptr(rsp, aligned_size);
+ }
+}
+
// Defines obj, preserves var_size_in_bytes
void MacroAssembler::eden_allocate(Register thread, Register obj,
Register var_size_in_bytes,
@@ -4605,14 +4775,14 @@ void MacroAssembler::access_load_at(BasicType type, DecoratorSet decorators, Reg
}
void MacroAssembler::access_store_at(BasicType type, DecoratorSet decorators, Address dst, Register src,
- Register tmp1, Register tmp2) {
+ Register tmp1, Register tmp2, Register tmp3) {
BarrierSetAssembler* bs = BarrierSet::barrier_set()->barrier_set_assembler();
decorators = AccessInternal::decorator_fixup(decorators);
bool as_raw = (decorators & AS_RAW) != 0;
if (as_raw) {
- bs->BarrierSetAssembler::store_at(this, decorators, type, dst, src, tmp1, tmp2);
+ bs->BarrierSetAssembler::store_at(this, decorators, type, dst, src, tmp1, tmp2, tmp3);
} else {
- bs->store_at(this, decorators, type, dst, src, tmp1, tmp2);
+ bs->store_at(this, decorators, type, dst, src, tmp1, tmp2, tmp3);
}
}
@@ -4628,13 +4798,13 @@ void MacroAssembler::load_heap_oop_not_null(Register dst, Address src, Register
}
void MacroAssembler::store_heap_oop(Address dst, Register src, Register tmp1,
- Register tmp2, DecoratorSet decorators) {
- access_store_at(T_OBJECT, IN_HEAP | decorators, dst, src, tmp1, tmp2);
+ Register tmp2, Register tmp3, DecoratorSet decorators) {
+ access_store_at(T_OBJECT, IN_HEAP | decorators, dst, src, tmp1, tmp2, tmp3);
}
// Used for storing NULLs.
void MacroAssembler::store_heap_oop_null(Address dst) {
- access_store_at(T_OBJECT, IN_HEAP, dst, noreg, noreg, noreg);
+ access_store_at(T_OBJECT, IN_HEAP, dst, noreg, noreg, noreg, noreg);
}
#ifdef _LP64
diff --git a/src/hotspot/cpu/x86/macroAssembler_x86.hpp b/src/hotspot/cpu/x86/macroAssembler_x86.hpp
index 3593874866ca81157b1fa49f975034d866383443..9b3da9d5de15cce9a36f1ac9438c47f715701f44 100644
--- a/src/hotspot/cpu/x86/macroAssembler_x86.hpp
+++ b/src/hotspot/cpu/x86/macroAssembler_x86.hpp
@@ -26,6 +26,7 @@
#define CPU_X86_MACROASSEMBLER_X86_HPP
#include "asm/assembler.hpp"
+#include "asm/register.hpp"
#include "code/vmreg.inline.hpp"
#include "compiler/oopMap.hpp"
#include "utilities/macros.hpp"
@@ -345,14 +346,14 @@ class MacroAssembler: public Assembler {
void access_load_at(BasicType type, DecoratorSet decorators, Register dst, Address src,
Register tmp1, Register thread_tmp);
void access_store_at(BasicType type, DecoratorSet decorators, Address dst, Register src,
- Register tmp1, Register tmp2);
+ Register tmp1, Register tmp2, Register tmp3);
void load_heap_oop(Register dst, Address src, Register tmp1 = noreg,
Register thread_tmp = noreg, DecoratorSet decorators = 0);
void load_heap_oop_not_null(Register dst, Address src, Register tmp1 = noreg,
Register thread_tmp = noreg, DecoratorSet decorators = 0);
void store_heap_oop(Address dst, Register src, Register tmp1 = noreg,
- Register tmp2 = noreg, DecoratorSet decorators = 0);
+ Register tmp2 = noreg, Register tmp3 = noreg, DecoratorSet decorators = 0);
// Used for storing NULL. All other oop constants should be
// stored using routines that take a jobject.
@@ -521,9 +522,34 @@ class MacroAssembler: public Assembler {
// Round up to a power of two
void round_to(Register reg, int modulus);
- // Callee saved registers handling
- void push_callee_saved_registers();
- void pop_callee_saved_registers();
+private:
+ // General purpose and XMM registers potentially clobbered by native code; there
+ // is no need for FPU- or AVX opmask-related methods because C1/interpreter code
+ // - always saves/restores the FPU state as a whole
+ // - does not care about the AVX-512 opmask registers
+ static RegSet call_clobbered_gp_registers();
+ static XMMRegSet call_clobbered_xmm_registers();
+
+ void push_set(XMMRegSet set, int offset);
+ void pop_set(XMMRegSet set, int offset);
+
+public:
+ void push_set(RegSet set, int offset = -1);
+ void pop_set(RegSet set, int offset = -1);
+
+ // Push and pop everything that might be clobbered by a native
+ // runtime call.
+ // Only save the lower 64 bits of each vector register.
+ // Additional registers can be excluded in a passed RegSet.
+ void push_call_clobbered_registers_except(RegSet exclude, bool save_fpu = true);
+ void pop_call_clobbered_registers_except(RegSet exclude, bool restore_fpu = true);
+
+ void push_call_clobbered_registers(bool save_fpu = true) {
+ push_call_clobbered_registers_except(RegSet(), save_fpu);
+ }
+ void pop_call_clobbered_registers(bool restore_fpu = true) {
+ pop_call_clobbered_registers_except(RegSet(), restore_fpu);
+ }
// allocation
void eden_allocate(
diff --git a/src/hotspot/cpu/x86/matcher_x86.hpp b/src/hotspot/cpu/x86/matcher_x86.hpp
index 61af24cf31c52261cbb2fd7149c880f1a22373fb..9711bc8c2c4368d36d616aacb13c54511ac67fe7 100644
--- a/src/hotspot/cpu/x86/matcher_x86.hpp
+++ b/src/hotspot/cpu/x86/matcher_x86.hpp
@@ -183,4 +183,13 @@
// Implements a variant of EncodeISOArrayNode that encode ASCII only
static const bool supports_encode_ascii_array = true;
+ // Returns pre-selection estimated cost of a vector operation.
+ static int vector_op_pre_select_sz_estimate(int vopc, BasicType ety, int vlen) {
+ switch(vopc) {
+ default: return 0;
+ case Op_PopCountVI: return VM_Version::supports_avx512_vpopcntdq() ? 0 : 50;
+ case Op_PopCountVL: return VM_Version::supports_avx512_vpopcntdq() ? 0 : 40;
+ }
+ }
+
#endif // CPU_X86_MATCHER_X86_HPP
diff --git a/src/hotspot/cpu/x86/register_definitions_x86.cpp b/src/hotspot/cpu/x86/register_definitions_x86.cpp
deleted file mode 100644
index 07466930ffac0df7d3b0e60474e26948613c71e7..0000000000000000000000000000000000000000
--- a/src/hotspot/cpu/x86/register_definitions_x86.cpp
+++ /dev/null
@@ -1,143 +0,0 @@
-/*
- * Copyright (c) 2002, 2016, Oracle and/or its affiliates. All rights reserved.
- * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
- *
- * This code is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License version 2 only, as
- * published by the Free Software Foundation.
- *
- * This code is distributed in the hope that it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * version 2 for more details (a copy is included in the LICENSE file that
- * accompanied this code).
- *
- * You should have received a copy of the GNU General Public License version
- * 2 along with this work; if not, write to the Free Software Foundation,
- * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
- *
- * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
- * or visit www.oracle.com if you need additional information or have any
- * questions.
- *
- */
-
-#include "precompiled.hpp"
-#include "asm/assembler.hpp"
-#include "asm/register.hpp"
-#include "register_x86.hpp"
-#include "interp_masm_x86.hpp"
-
-REGISTER_DEFINITION(Register, noreg);
-REGISTER_DEFINITION(Register, rax);
-REGISTER_DEFINITION(Register, rcx);
-REGISTER_DEFINITION(Register, rdx);
-REGISTER_DEFINITION(Register, rbx);
-REGISTER_DEFINITION(Register, rsp);
-REGISTER_DEFINITION(Register, rbp);
-REGISTER_DEFINITION(Register, rsi);
-REGISTER_DEFINITION(Register, rdi);
-#ifdef AMD64
-REGISTER_DEFINITION(Register, r8);
-REGISTER_DEFINITION(Register, r9);
-REGISTER_DEFINITION(Register, r10);
-REGISTER_DEFINITION(Register, r11);
-REGISTER_DEFINITION(Register, r12);
-REGISTER_DEFINITION(Register, r13);
-REGISTER_DEFINITION(Register, r14);
-REGISTER_DEFINITION(Register, r15);
-#endif // AMD64
-
-REGISTER_DEFINITION(FloatRegister, fnoreg);
-
-REGISTER_DEFINITION(XMMRegister, xnoreg);
-REGISTER_DEFINITION(XMMRegister, xmm0 );
-REGISTER_DEFINITION(XMMRegister, xmm1 );
-REGISTER_DEFINITION(XMMRegister, xmm2 );
-REGISTER_DEFINITION(XMMRegister, xmm3 );
-REGISTER_DEFINITION(XMMRegister, xmm4 );
-REGISTER_DEFINITION(XMMRegister, xmm5 );
-REGISTER_DEFINITION(XMMRegister, xmm6 );
-REGISTER_DEFINITION(XMMRegister, xmm7 );
-#ifdef AMD64
-REGISTER_DEFINITION(XMMRegister, xmm8);
-REGISTER_DEFINITION(XMMRegister, xmm9);
-REGISTER_DEFINITION(XMMRegister, xmm10);
-REGISTER_DEFINITION(XMMRegister, xmm11);
-REGISTER_DEFINITION(XMMRegister, xmm12);
-REGISTER_DEFINITION(XMMRegister, xmm13);
-REGISTER_DEFINITION(XMMRegister, xmm14);
-REGISTER_DEFINITION(XMMRegister, xmm15);
-REGISTER_DEFINITION(XMMRegister, xmm16);
-REGISTER_DEFINITION(XMMRegister, xmm17);
-REGISTER_DEFINITION(XMMRegister, xmm18);
-REGISTER_DEFINITION(XMMRegister, xmm19);
-REGISTER_DEFINITION(XMMRegister, xmm20);
-REGISTER_DEFINITION(XMMRegister, xmm21);
-REGISTER_DEFINITION(XMMRegister, xmm22);
-REGISTER_DEFINITION(XMMRegister, xmm23);
-REGISTER_DEFINITION(XMMRegister, xmm24);
-REGISTER_DEFINITION(XMMRegister, xmm25);
-REGISTER_DEFINITION(XMMRegister, xmm26);
-REGISTER_DEFINITION(XMMRegister, xmm27);
-REGISTER_DEFINITION(XMMRegister, xmm28);
-REGISTER_DEFINITION(XMMRegister, xmm29);
-REGISTER_DEFINITION(XMMRegister, xmm30);
-REGISTER_DEFINITION(XMMRegister, xmm31);
-
-REGISTER_DEFINITION(Register, c_rarg0);
-REGISTER_DEFINITION(Register, c_rarg1);
-REGISTER_DEFINITION(Register, c_rarg2);
-REGISTER_DEFINITION(Register, c_rarg3);
-
-REGISTER_DEFINITION(XMMRegister, c_farg0);
-REGISTER_DEFINITION(XMMRegister, c_farg1);
-REGISTER_DEFINITION(XMMRegister, c_farg2);
-REGISTER_DEFINITION(XMMRegister, c_farg3);
-
-// Non windows OS's have a few more argument registers
-#ifndef _WIN64
-REGISTER_DEFINITION(Register, c_rarg4);
-REGISTER_DEFINITION(Register, c_rarg5);
-
-REGISTER_DEFINITION(XMMRegister, c_farg4);
-REGISTER_DEFINITION(XMMRegister, c_farg5);
-REGISTER_DEFINITION(XMMRegister, c_farg6);
-REGISTER_DEFINITION(XMMRegister, c_farg7);
-#endif /* _WIN64 */
-
-REGISTER_DEFINITION(Register, j_rarg0);
-REGISTER_DEFINITION(Register, j_rarg1);
-REGISTER_DEFINITION(Register, j_rarg2);
-REGISTER_DEFINITION(Register, j_rarg3);
-REGISTER_DEFINITION(Register, j_rarg4);
-REGISTER_DEFINITION(Register, j_rarg5);
-
-REGISTER_DEFINITION(XMMRegister, j_farg0);
-REGISTER_DEFINITION(XMMRegister, j_farg1);
-REGISTER_DEFINITION(XMMRegister, j_farg2);
-REGISTER_DEFINITION(XMMRegister, j_farg3);
-REGISTER_DEFINITION(XMMRegister, j_farg4);
-REGISTER_DEFINITION(XMMRegister, j_farg5);
-REGISTER_DEFINITION(XMMRegister, j_farg6);
-REGISTER_DEFINITION(XMMRegister, j_farg7);
-
-REGISTER_DEFINITION(Register, rscratch1);
-REGISTER_DEFINITION(Register, rscratch2);
-
-REGISTER_DEFINITION(Register, r12_heapbase);
-REGISTER_DEFINITION(Register, r15_thread);
-#endif // AMD64
-
-REGISTER_DEFINITION(KRegister, knoreg);
-REGISTER_DEFINITION(KRegister, k0);
-REGISTER_DEFINITION(KRegister, k1);
-REGISTER_DEFINITION(KRegister, k2);
-REGISTER_DEFINITION(KRegister, k3);
-REGISTER_DEFINITION(KRegister, k4);
-REGISTER_DEFINITION(KRegister, k5);
-REGISTER_DEFINITION(KRegister, k6);
-REGISTER_DEFINITION(KRegister, k7);
-
-// JSR 292
-REGISTER_DEFINITION(Register, rbp_mh_SP_save);
diff --git a/src/hotspot/cpu/x86/register_x86.hpp b/src/hotspot/cpu/x86/register_x86.hpp
index b9ac28902407560b1d03df290b731b4f407ede7c..f57b1db48c838e86ddddacca263f8d2662f244c8 100644
--- a/src/hotspot/cpu/x86/register_x86.hpp
+++ b/src/hotspot/cpu/x86/register_x86.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2000, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2000, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -26,6 +26,8 @@
#define CPU_X86_REGISTER_X86_HPP
#include "asm/register.hpp"
+#include "utilities/count_leading_zeros.hpp"
+#include "utilities/powerOfTwo.hpp"
class VMRegImpl;
typedef VMRegImpl* VMReg;
@@ -135,7 +137,7 @@ inline XMMRegister as_XMMRegister(int encoding) {
}
-// The implementation of XMM registers for the IA32 architecture
+// The implementation of XMM registers.
class XMMRegisterImpl: public AbstractRegisterImpl {
public:
enum {
@@ -201,11 +203,7 @@ CONSTANT_REGISTER_DECLARATION(XMMRegister, xmm30, (30));
CONSTANT_REGISTER_DECLARATION(XMMRegister, xmm31, (31));
#endif // AMD64
-// Only used by the 32bit stubGenerator. These can't be described by vmreg and hence
-// can't be described in oopMaps and therefore can't be used by the compilers (at least
-// were deopt might wan't to see them).
-
-// Use XMMRegister as shortcut
+// Use KRegister as shortcut
class KRegisterImpl;
typedef KRegisterImpl* KRegister;
@@ -213,7 +211,7 @@ inline KRegister as_KRegister(int encoding) {
return (KRegister)(intptr_t)encoding;
}
-// The implementation of XMM registers for the IA32 architecture
+// The implementation of AVX-3 (AVX-512) opmask registers.
class KRegisterImpl : public AbstractRegisterImpl {
public:
enum {
@@ -276,4 +274,33 @@ class ConcreteRegisterImpl : public AbstractRegisterImpl {
};
+template <>
+inline Register AbstractRegSet<Register>::first() {
+ uint32_t first = _bitset & -_bitset;
+ return first ? as_Register(exact_log2(first)) : noreg;
+}
+
+template <>
+inline Register AbstractRegSet<Register>::last() {
+ if (_bitset == 0) { return noreg; }
+ uint32_t last = 31 - count_leading_zeros(_bitset);
+ return as_Register(last);
+}
+
+template <>
+inline XMMRegister AbstractRegSet<XMMRegister>::first() {
+ uint32_t first = _bitset & -_bitset;
+ return first ? as_XMMRegister(exact_log2(first)) : xnoreg;
+}
+
+template <>
+inline XMMRegister AbstractRegSet<XMMRegister>::last() {
+ if (_bitset == 0) { return xnoreg; }
+ uint32_t last = 31 - count_leading_zeros(_bitset);
+ return as_XMMRegister(last);
+}
+
+typedef AbstractRegSet<Register> RegSet;
+typedef AbstractRegSet<XMMRegister> XMMRegSet;
+
#endif // CPU_X86_REGISTER_X86_HPP
diff --git a/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp b/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp
index 6597c91bb42ff11d09b4c5c741a35bd0b27f8d25..8bfbe3303da9e9a8f1dc0604cdbaaf6329d780c5 100644
--- a/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp
+++ b/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp
@@ -3002,7 +3002,7 @@ RuntimeStub* SharedRuntime::generate_resolve_blob(address destination, const cha
// allocate space for the code
ResourceMark rm;
- CodeBuffer buffer(name, 1000, 512);
+ CodeBuffer buffer(name, 1200, 512);
MacroAssembler* masm = new MacroAssembler(&buffer);
int frame_size_in_words;
diff --git a/src/hotspot/cpu/x86/stubGenerator_x86_32.cpp b/src/hotspot/cpu/x86/stubGenerator_x86_32.cpp
index 1525d10e5b5f3dea743318ca097dd1b3098dd6f5..24cfc237b23591e90d3633e26529c4c1b5051d09 100644
--- a/src/hotspot/cpu/x86/stubGenerator_x86_32.cpp
+++ b/src/hotspot/cpu/x86/stubGenerator_x86_32.cpp
@@ -588,6 +588,30 @@ class StubGenerator: public StubCodeGenerator {
return start;
}
+ address generate_popcount_avx_lut(const char *stub_name) {
+ __ align64();
+ StubCodeMark mark(this, "StubRoutines", stub_name);
+ address start = __ pc();
+ __ emit_data(0x02010100, relocInfo::none, 0);
+ __ emit_data(0x03020201, relocInfo::none, 0);
+ __ emit_data(0x03020201, relocInfo::none, 0);
+ __ emit_data(0x04030302, relocInfo::none, 0);
+ __ emit_data(0x02010100, relocInfo::none, 0);
+ __ emit_data(0x03020201, relocInfo::none, 0);
+ __ emit_data(0x03020201, relocInfo::none, 0);
+ __ emit_data(0x04030302, relocInfo::none, 0);
+ __ emit_data(0x02010100, relocInfo::none, 0);
+ __ emit_data(0x03020201, relocInfo::none, 0);
+ __ emit_data(0x03020201, relocInfo::none, 0);
+ __ emit_data(0x04030302, relocInfo::none, 0);
+ __ emit_data(0x02010100, relocInfo::none, 0);
+ __ emit_data(0x03020201, relocInfo::none, 0);
+ __ emit_data(0x03020201, relocInfo::none, 0);
+ __ emit_data(0x04030302, relocInfo::none, 0);
+ return start;
+ }
+
+
address generate_iota_indices(const char *stub_name) {
__ align(CodeEntryAlignment);
StubCodeMark mark(this, "StubRoutines", stub_name);
@@ -4004,6 +4028,11 @@ class StubGenerator: public StubCodeGenerator {
StubRoutines::x86::_vector_int_mask_cmp_bits = generate_vector_mask("vector_int_mask_cmp_bits", 0x00000001);
StubRoutines::x86::_vector_iota_indices = generate_iota_indices("iota_indices");
+ if (UsePopCountInstruction && VM_Version::supports_avx2() && !VM_Version::supports_avx512_vpopcntdq()) {
+ // LUT implementation influenced by the counting-1-bits algorithm from section 5-1 of Hacker's Delight.
+ StubRoutines::x86::_vector_popcount_lut = generate_popcount_avx_lut("popcount_lut");
+ }
+
// support for verify_oop (must happen after universe_init)
StubRoutines::_verify_oop_subroutine_entry = generate_verify_oop();
diff --git a/src/hotspot/cpu/x86/stubGenerator_x86_64.cpp b/src/hotspot/cpu/x86/stubGenerator_x86_64.cpp
index 8b7188ca42c88ff68959b723b177f950d8fa87bc..39d5cbe2fb4638c4e5104bc53c86a13848e85b4c 100644
--- a/src/hotspot/cpu/x86/stubGenerator_x86_64.cpp
+++ b/src/hotspot/cpu/x86/stubGenerator_x86_64.cpp
@@ -795,6 +795,21 @@ class StubGenerator: public StubCodeGenerator {
return start;
}
+ address generate_popcount_avx_lut(const char *stub_name) {
+ __ align64();
+ StubCodeMark mark(this, "StubRoutines", stub_name);
+ address start = __ pc();
+ __ emit_data64(0x0302020102010100, relocInfo::none);
+ __ emit_data64(0x0403030203020201, relocInfo::none);
+ __ emit_data64(0x0302020102010100, relocInfo::none);
+ __ emit_data64(0x0403030203020201, relocInfo::none);
+ __ emit_data64(0x0302020102010100, relocInfo::none);
+ __ emit_data64(0x0403030203020201, relocInfo::none);
+ __ emit_data64(0x0302020102010100, relocInfo::none);
+ __ emit_data64(0x0403030203020201, relocInfo::none);
+ return start;
+ }
+
address generate_iota_indices(const char *stub_name) {
__ align(CodeEntryAlignment);
StubCodeMark mark(this, "StubRoutines", stub_name);
@@ -2833,7 +2848,7 @@ class StubGenerator: public StubCodeGenerator {
__ align(OptoLoopAlignment);
__ BIND(L_store_element);
- __ store_heap_oop(to_element_addr, rax_oop, noreg, noreg, AS_RAW); // store the oop
+ __ store_heap_oop(to_element_addr, rax_oop, noreg, noreg, noreg, AS_RAW); // store the oop
__ increment(count); // increment the count toward zero
__ jcc(Assembler::zero, L_do_card_marks);
@@ -7713,6 +7728,11 @@ address generate_avx_ghash_processBlocks() {
StubRoutines::x86::_vector_long_sign_mask = generate_vector_mask("vector_long_sign_mask", 0x8000000000000000);
StubRoutines::x86::_vector_iota_indices = generate_iota_indices("iota_indices");
+ if (UsePopCountInstruction && VM_Version::supports_avx2() && !VM_Version::supports_avx512_vpopcntdq()) {
+ // LUT implementation influenced by the counting-1-bits algorithm from section 5-1 of Hacker's Delight.
+ StubRoutines::x86::_vector_popcount_lut = generate_popcount_avx_lut("popcount_lut");
+ }
+
// support for verify_oop (must happen after universe_init)
if (VerifyOops) {
StubRoutines::_verify_oop_subroutine_entry = generate_verify_oop();
diff --git a/src/hotspot/cpu/x86/stubRoutines_x86.cpp b/src/hotspot/cpu/x86/stubRoutines_x86.cpp
index 81362c76bd69472828ffcac5040d3d8a5dd1fcd8..f5a0eb623d0d269c89bd905fee044ff47435a5e1 100644
--- a/src/hotspot/cpu/x86/stubRoutines_x86.cpp
+++ b/src/hotspot/cpu/x86/stubRoutines_x86.cpp
@@ -59,6 +59,7 @@ address StubRoutines::x86::_vector_double_sign_flip = NULL;
address StubRoutines::x86::_vector_byte_perm_mask = NULL;
address StubRoutines::x86::_vector_long_sign_mask = NULL;
address StubRoutines::x86::_vector_iota_indices = NULL;
+address StubRoutines::x86::_vector_popcount_lut = NULL;
address StubRoutines::x86::_vector_32_bit_mask = NULL;
address StubRoutines::x86::_vector_64_bit_mask = NULL;
#ifdef _LP64
diff --git a/src/hotspot/cpu/x86/stubRoutines_x86.hpp b/src/hotspot/cpu/x86/stubRoutines_x86.hpp
index e4dd9550ce28343e24917176eababb5913835137..5119dde4fd5427a866036ba95425c4116805ce15 100644
--- a/src/hotspot/cpu/x86/stubRoutines_x86.hpp
+++ b/src/hotspot/cpu/x86/stubRoutines_x86.hpp
@@ -177,6 +177,7 @@ class x86 {
static address _vector_short_shuffle_mask;
static address _vector_long_shuffle_mask;
static address _vector_iota_indices;
+ static address _vector_popcount_lut;
#ifdef _LP64
static juint _k256_W[];
static address _k256_W_adr;
@@ -340,6 +341,9 @@ class x86 {
return _vector_iota_indices;
}
+ static address vector_popcount_lut() {
+ return _vector_popcount_lut;
+ }
#ifdef _LP64
static address k256_W_addr() { return _k256_W_adr; }
static address k512_W_addr() { return _k512_W_addr; }
diff --git a/src/hotspot/cpu/x86/templateInterpreterGenerator_x86.cpp b/src/hotspot/cpu/x86/templateInterpreterGenerator_x86.cpp
index ca7bcd8e50ec49d707f55f56589f83f37d6a046c..7b14aff6f1f6e2f7c71b8cd18e9576262a47e20c 100644
--- a/src/hotspot/cpu/x86/templateInterpreterGenerator_x86.cpp
+++ b/src/hotspot/cpu/x86/templateInterpreterGenerator_x86.cpp
@@ -388,7 +388,6 @@ address TemplateInterpreterGenerator::generate_safept_entry_for(
void TemplateInterpreterGenerator::generate_counter_incr(Label* overflow) {
Label done;
// Note: In tiered we increment either counters in Method* or in MDO depending if we're profiling or not.
- int increment = InvocationCounter::count_increment;
Label no_mdo;
if (ProfileInterpreter) {
// Are we profiling?
@@ -399,7 +398,7 @@ void TemplateInterpreterGenerator::generate_counter_incr(Label* overflow) {
const Address mdo_invocation_counter(rax, in_bytes(MethodData::invocation_counter_offset()) +
in_bytes(InvocationCounter::counter_offset()));
const Address mask(rax, in_bytes(MethodData::invoke_mask_offset()));
- __ increment_mask_and_jump(mdo_invocation_counter, increment, mask, rcx, false, Assembler::zero, overflow);
+ __ increment_mask_and_jump(mdo_invocation_counter, mask, rcx, overflow);
__ jmp(done);
}
__ bind(no_mdo);
@@ -409,8 +408,7 @@ void TemplateInterpreterGenerator::generate_counter_incr(Label* overflow) {
InvocationCounter::counter_offset());
__ get_method_counters(rbx, rax, done);
const Address mask(rax, in_bytes(MethodCounters::invoke_mask_offset()));
- __ increment_mask_and_jump(invocation_counter, increment, mask, rcx,
- false, Assembler::zero, overflow);
+ __ increment_mask_and_jump(invocation_counter, mask, rcx, overflow);
__ bind(done);
}
@@ -755,8 +753,8 @@ void TemplateInterpreterGenerator::bang_stack_shadow_pages(bool native_call) {
__ bang_stack_with_offset(p*page_size);
}
- // Record a new watermark, unless the update is above the safe limit.
- // Otherwise, the next time around a check above would pass the safe limit.
+ // Record the new watermark, but only if the update is above the safe limit.
+ // Otherwise, the next time around the check above would pass the safe limit.
__ cmpptr(rsp, Address(thread, JavaThread::shadow_zone_safe_limit()));
__ jccb(Assembler::belowEqual, L_done);
__ movptr(Address(thread, JavaThread::shadow_zone_growth_watermark()), rsp);
diff --git a/src/hotspot/cpu/x86/templateInterpreterGenerator_x86_64.cpp b/src/hotspot/cpu/x86/templateInterpreterGenerator_x86_64.cpp
index 8702a28ac689fc071bd4bf91acfcdb7d7bf7adb5..1a1dad0cea317543dda0b0604f8e614097c2233f 100644
--- a/src/hotspot/cpu/x86/templateInterpreterGenerator_x86_64.cpp
+++ b/src/hotspot/cpu/x86/templateInterpreterGenerator_x86_64.cpp
@@ -69,7 +69,7 @@ address TemplateInterpreterGenerator::generate_slow_signature_handler() {
Label isfloatordouble, isdouble, next;
__ testl(c_rarg3, 1 << (i*2)); // Float or Double?
- __ jcc(Assembler::notZero, isfloatordouble);
+ __ jccb(Assembler::notZero, isfloatordouble);
// Do Int register here
switch ( i ) {
@@ -88,15 +88,15 @@ address TemplateInterpreterGenerator::generate_slow_signature_handler() {
break;
}
- __ jmp (next);
+ __ jmpb(next);
__ bind(isfloatordouble);
__ testl(c_rarg3, 1 << ((i*2)+1)); // Double?
- __ jcc(Assembler::notZero, isdouble);
+ __ jccb(Assembler::notZero, isdouble);
// Do Float Here
__ movflt(floatreg, Address(rsp, i * wordSize));
- __ jmp(next);
+ __ jmpb(next);
// Do Double here
__ bind(isdouble);
@@ -150,9 +150,9 @@ address TemplateInterpreterGenerator::generate_slow_signature_handler() {
Label d, done;
__ testl(c_rarg3, 1 << i);
- __ jcc(Assembler::notZero, d);
+ __ jccb(Assembler::notZero, d);
__ movflt(r, Address(rsp, (6 + i) * wordSize));
- __ jmp(done);
+ __ jmpb(done);
__ bind(d);
__ movdbl(r, Address(rsp, (6 + i) * wordSize));
__ bind(done);
diff --git a/src/hotspot/cpu/x86/templateTable_x86.cpp b/src/hotspot/cpu/x86/templateTable_x86.cpp
index 0532fb17785c0faffd5a32bccec88edc19705e70..531ff7956b4bc86007977c5a8c4148f88dde12ef 100644
--- a/src/hotspot/cpu/x86/templateTable_x86.cpp
+++ b/src/hotspot/cpu/x86/templateTable_x86.cpp
@@ -152,7 +152,7 @@ static void do_oop_store(InterpreterMacroAssembler* _masm,
Register val,
DecoratorSet decorators = 0) {
assert(val == noreg || val == rax, "parameter is just for looks");
- __ store_heap_oop(dst, val, rdx, rbx, decorators);
+ __ store_heap_oop(dst, val, rdx, rbx, LP64_ONLY(r8) NOT_LP64(rsi), decorators);
}
static void do_oop_load(InterpreterMacroAssembler* _masm,
@@ -1067,7 +1067,7 @@ void TemplateTable::iastore() {
__ access_store_at(T_INT, IN_HEAP | IS_ARRAY,
Address(rdx, rbx, Address::times_4,
arrayOopDesc::base_offset_in_bytes(T_INT)),
- rax, noreg, noreg);
+ rax, noreg, noreg, noreg);
}
void TemplateTable::lastore() {
@@ -1081,7 +1081,7 @@ void TemplateTable::lastore() {
__ access_store_at(T_LONG, IN_HEAP | IS_ARRAY,
Address(rcx, rbx, Address::times_8,
arrayOopDesc::base_offset_in_bytes(T_LONG)),
- noreg /* ltos */, noreg, noreg);
+ noreg /* ltos */, noreg, noreg, noreg);
}
@@ -1095,7 +1095,7 @@ void TemplateTable::fastore() {
__ access_store_at(T_FLOAT, IN_HEAP | IS_ARRAY,
Address(rdx, rbx, Address::times_4,
arrayOopDesc::base_offset_in_bytes(T_FLOAT)),
- noreg /* ftos */, noreg, noreg);
+ noreg /* ftos */, noreg, noreg, noreg);
}
void TemplateTable::dastore() {
@@ -1108,7 +1108,7 @@ void TemplateTable::dastore() {
__ access_store_at(T_DOUBLE, IN_HEAP | IS_ARRAY,
Address(rdx, rbx, Address::times_8,
arrayOopDesc::base_offset_in_bytes(T_DOUBLE)),
- noreg /* dtos */, noreg, noreg);
+ noreg /* dtos */, noreg, noreg, noreg);
}
void TemplateTable::aastore() {
@@ -1186,7 +1186,7 @@ void TemplateTable::bastore() {
__ access_store_at(T_BYTE, IN_HEAP | IS_ARRAY,
Address(rdx, rbx,Address::times_1,
arrayOopDesc::base_offset_in_bytes(T_BYTE)),
- rax, noreg, noreg);
+ rax, noreg, noreg, noreg);
}
void TemplateTable::castore() {
@@ -1199,7 +1199,7 @@ void TemplateTable::castore() {
__ access_store_at(T_CHAR, IN_HEAP | IS_ARRAY,
Address(rdx, rbx, Address::times_2,
arrayOopDesc::base_offset_in_bytes(T_CHAR)),
- rax, noreg, noreg);
+ rax, noreg, noreg, noreg);
}
@@ -2197,7 +2197,6 @@ void TemplateTable::branch(bool is_jsr, bool is_wide) {
__ bind(has_counters);
Label no_mdo;
- int increment = InvocationCounter::count_increment;
if (ProfileInterpreter) {
// Are we profiling?
__ movptr(rbx, Address(rcx, in_bytes(Method::method_data_offset())));
@@ -2207,7 +2206,7 @@ void TemplateTable::branch(bool is_jsr, bool is_wide) {
const Address mdo_backedge_counter(rbx, in_bytes(MethodData::backedge_counter_offset()) +
in_bytes(InvocationCounter::counter_offset()));
const Address mask(rbx, in_bytes(MethodData::backedge_mask_offset()));
- __ increment_mask_and_jump(mdo_backedge_counter, increment, mask, rax, false, Assembler::zero,
+ __ increment_mask_and_jump(mdo_backedge_counter, mask, rax,
UseOnStackReplacement ? &backedge_counter_overflow : NULL);
__ jmp(dispatch);
}
@@ -2215,8 +2214,8 @@ void TemplateTable::branch(bool is_jsr, bool is_wide) {
// Increment backedge counter in MethodCounters*
__ movptr(rcx, Address(rcx, Method::method_counters_offset()));
const Address mask(rcx, in_bytes(MethodCounters::backedge_mask_offset()));
- __ increment_mask_and_jump(Address(rcx, be_offset), increment, mask,
- rax, false, Assembler::zero, UseOnStackReplacement ? &backedge_counter_overflow : NULL);
+ __ increment_mask_and_jump(Address(rcx, be_offset), mask, rax,
+ UseOnStackReplacement ? &backedge_counter_overflow : NULL);
__ bind(dispatch);
}
@@ -3102,7 +3101,7 @@ void TemplateTable::putfield_or_static_helper(int byte_no, bool is_static, Rewri
{
__ pop(btos);
if (!is_static) pop_and_check_object(obj);
- __ access_store_at(T_BYTE, IN_HEAP, field, rax, noreg, noreg);
+ __ access_store_at(T_BYTE, IN_HEAP, field, rax, noreg, noreg, noreg);
if (!is_static && rc == may_rewrite) {
patch_bytecode(Bytecodes::_fast_bputfield, bc, rbx, true, byte_no);
}
@@ -3117,7 +3116,7 @@ void TemplateTable::putfield_or_static_helper(int byte_no, bool is_static, Rewri
{
__ pop(ztos);
if (!is_static) pop_and_check_object(obj);
- __ access_store_at(T_BOOLEAN, IN_HEAP, field, rax, noreg, noreg);
+ __ access_store_at(T_BOOLEAN, IN_HEAP, field, rax, noreg, noreg, noreg);
if (!is_static && rc == may_rewrite) {
patch_bytecode(Bytecodes::_fast_zputfield, bc, rbx, true, byte_no);
}
@@ -3148,7 +3147,7 @@ void TemplateTable::putfield_or_static_helper(int byte_no, bool is_static, Rewri
{
__ pop(itos);
if (!is_static) pop_and_check_object(obj);
- __ access_store_at(T_INT, IN_HEAP, field, rax, noreg, noreg);
+ __ access_store_at(T_INT, IN_HEAP, field, rax, noreg, noreg, noreg);
if (!is_static && rc == may_rewrite) {
patch_bytecode(Bytecodes::_fast_iputfield, bc, rbx, true, byte_no);
}
@@ -3163,7 +3162,7 @@ void TemplateTable::putfield_or_static_helper(int byte_no, bool is_static, Rewri
{
__ pop(ctos);
if (!is_static) pop_and_check_object(obj);
- __ access_store_at(T_CHAR, IN_HEAP, field, rax, noreg, noreg);
+ __ access_store_at(T_CHAR, IN_HEAP, field, rax, noreg, noreg, noreg);
if (!is_static && rc == may_rewrite) {
patch_bytecode(Bytecodes::_fast_cputfield, bc, rbx, true, byte_no);
}
@@ -3178,7 +3177,7 @@ void TemplateTable::putfield_or_static_helper(int byte_no, bool is_static, Rewri
{
__ pop(stos);
if (!is_static) pop_and_check_object(obj);
- __ access_store_at(T_SHORT, IN_HEAP, field, rax, noreg, noreg);
+ __ access_store_at(T_SHORT, IN_HEAP, field, rax, noreg, noreg, noreg);
if (!is_static && rc == may_rewrite) {
patch_bytecode(Bytecodes::_fast_sputfield, bc, rbx, true, byte_no);
}
@@ -3194,7 +3193,7 @@ void TemplateTable::putfield_or_static_helper(int byte_no, bool is_static, Rewri
__ pop(ltos);
if (!is_static) pop_and_check_object(obj);
// MO_RELAXED: generate atomic store for the case of volatile field (important for x86_32)
- __ access_store_at(T_LONG, IN_HEAP | MO_RELAXED, field, noreg /* ltos*/, noreg, noreg);
+ __ access_store_at(T_LONG, IN_HEAP | MO_RELAXED, field, noreg /* ltos*/, noreg, noreg, noreg);
#ifdef _LP64
if (!is_static && rc == may_rewrite) {
patch_bytecode(Bytecodes::_fast_lputfield, bc, rbx, true, byte_no);
@@ -3211,7 +3210,7 @@ void TemplateTable::putfield_or_static_helper(int byte_no, bool is_static, Rewri
{
__ pop(ftos);
if (!is_static) pop_and_check_object(obj);
- __ access_store_at(T_FLOAT, IN_HEAP, field, noreg /* ftos */, noreg, noreg);
+ __ access_store_at(T_FLOAT, IN_HEAP, field, noreg /* ftos */, noreg, noreg, noreg);
if (!is_static && rc == may_rewrite) {
patch_bytecode(Bytecodes::_fast_fputfield, bc, rbx, true, byte_no);
}
@@ -3230,7 +3229,7 @@ void TemplateTable::putfield_or_static_helper(int byte_no, bool is_static, Rewri
__ pop(dtos);
if (!is_static) pop_and_check_object(obj);
// MO_RELAXED: for the case of volatile field, in fact it adds no extra work for the underlying implementation
- __ access_store_at(T_DOUBLE, IN_HEAP | MO_RELAXED, field, noreg /* dtos */, noreg, noreg);
+ __ access_store_at(T_DOUBLE, IN_HEAP | MO_RELAXED, field, noreg /* dtos */, noreg, noreg, noreg);
if (!is_static && rc == may_rewrite) {
patch_bytecode(Bytecodes::_fast_dputfield, bc, rbx, true, byte_no);
}
@@ -3373,31 +3372,31 @@ void TemplateTable::fast_storefield_helper(Address field, Register rax) {
break;
case Bytecodes::_fast_lputfield:
#ifdef _LP64
- __ access_store_at(T_LONG, IN_HEAP, field, noreg /* ltos */, noreg, noreg);
+ __ access_store_at(T_LONG, IN_HEAP, field, noreg /* ltos */, noreg, noreg, noreg);
#else
__ stop("should not be rewritten");
#endif
break;
case Bytecodes::_fast_iputfield:
- __ access_store_at(T_INT, IN_HEAP, field, rax, noreg, noreg);
+ __ access_store_at(T_INT, IN_HEAP, field, rax, noreg, noreg, noreg);
break;
case Bytecodes::_fast_zputfield:
- __ access_store_at(T_BOOLEAN, IN_HEAP, field, rax, noreg, noreg);
+ __ access_store_at(T_BOOLEAN, IN_HEAP, field, rax, noreg, noreg, noreg);
break;
case Bytecodes::_fast_bputfield:
- __ access_store_at(T_BYTE, IN_HEAP, field, rax, noreg, noreg);
+ __ access_store_at(T_BYTE, IN_HEAP, field, rax, noreg, noreg, noreg);
break;
case Bytecodes::_fast_sputfield:
- __ access_store_at(T_SHORT, IN_HEAP, field, rax, noreg, noreg);
+ __ access_store_at(T_SHORT, IN_HEAP, field, rax, noreg, noreg, noreg);
break;
case Bytecodes::_fast_cputfield:
- __ access_store_at(T_CHAR, IN_HEAP, field, rax, noreg, noreg);
+ __ access_store_at(T_CHAR, IN_HEAP, field, rax, noreg, noreg, noreg);
break;
case Bytecodes::_fast_fputfield:
- __ access_store_at(T_FLOAT, IN_HEAP, field, noreg /* ftos*/, noreg, noreg);
+ __ access_store_at(T_FLOAT, IN_HEAP, field, noreg /* ftos*/, noreg, noreg, noreg);
break;
case Bytecodes::_fast_dputfield:
- __ access_store_at(T_DOUBLE, IN_HEAP, field, noreg /* dtos*/, noreg, noreg);
+ __ access_store_at(T_DOUBLE, IN_HEAP, field, noreg /* dtos*/, noreg, noreg, noreg);
break;
default:
ShouldNotReachHere();
diff --git a/src/hotspot/cpu/x86/vm_version_x86.hpp b/src/hotspot/cpu/x86/vm_version_x86.hpp
index 2fd1bbc9617002e6c84cb8f5e76fe090ef1b438c..2f4e31b4708ec8e9c777584dd5e1e5ea65111133 100644
--- a/src/hotspot/cpu/x86/vm_version_x86.hpp
+++ b/src/hotspot/cpu/x86/vm_version_x86.hpp
@@ -1044,6 +1044,25 @@ public:
static bool supports_clflushopt() { return ((_features & CPU_FLUSHOPT) != 0); }
static bool supports_clwb() { return ((_features & CPU_CLWB) != 0); }
+ // Old CPUs perform lea on the AGU, which causes additional latency transferring the
+ // value from/to the ALU for other operations
+ static bool supports_fast_2op_lea() {
+ return (is_intel() && supports_avx()) || // Sandy Bridge and above
+ (is_amd() && supports_avx()); // Jaguar and Bulldozer and above
+ }
+
+ // Pre-Icelake Intel CPUs suffer from an inefficient 3-operand lea, which contains
+ // all of a base register, an index register and a displacement immediate, with 3-cycle latency.
+ // Note that when the address contains no displacement but the base register is
+ // rbp or r13, the machine code must contain a zero displacement immediate,
+ // effectively transforming a 2-operand lea into a 3-operand one. Such a lea can be
+ // replaced by add-add or lea-add.
+ static bool supports_fast_3op_lea() {
+ return supports_fast_2op_lea() &&
+ ((is_intel() && supports_clwb() && !is_intel_skylake()) || // Icelake and above
+ is_amd());
+ }
+
#ifdef __APPLE__
// Is the CPU running emulated (for example macOS Rosetta running x86_64 code on M1 ARM (aarch64)
static bool is_cpu_emulated();
diff --git a/src/hotspot/cpu/x86/x86.ad b/src/hotspot/cpu/x86/x86.ad
index 7ff67e9a085562722946e6c04f95cb18e4136fb6..ab28ebd5ca5e1da187a41fd625a5a5350aefea5b 100644
--- a/src/hotspot/cpu/x86/x86.ad
+++ b/src/hotspot/cpu/x86/x86.ad
@@ -1405,8 +1405,12 @@ const bool Matcher::match_rule_supported(int opcode) {
}
break;
case Op_PopCountVI:
+ if (!UsePopCountInstruction || (UseAVX < 2)) {
+ return false;
+ }
+ break;
case Op_PopCountVL:
- if (!UsePopCountInstruction || !VM_Version::supports_avx512_vpopcntdq()) {
+ if (!UsePopCountInstruction || (UseAVX <= 2)) {
return false;
}
break;
@@ -1861,6 +1865,18 @@ const bool Matcher::match_rule_supported_vector(int opcode, int vlen, BasicType
return false;
}
break;
+ case Op_PopCountVI:
+ if (!VM_Version::supports_avx512_vpopcntdq() &&
+ (vlen == 16) && !VM_Version::supports_avx512bw()) {
+ return false;
+ }
+ break;
+ case Op_PopCountVL:
+ if (!VM_Version::supports_avx512_vpopcntdq() &&
+ ((vlen <= 4) || ((vlen == 8) && !VM_Version::supports_avx512bw()))) {
+ return false;
+ }
+ break;
}
return true; // Per default match rules are supported.
}
@@ -8571,28 +8587,54 @@ instruct vmuladdaddS2I_reg(vec dst, vec src1, vec src2) %{
// --------------------------------- PopCount --------------------------------------
-instruct vpopcountI(vec dst, vec src) %{
+instruct vpopcountI_popcntd(vec dst, vec src) %{
+ predicate(VM_Version::supports_avx512_vpopcntdq());
match(Set dst (PopCountVI src));
- format %{ "vpopcntd $dst,$src\t! vector popcount packedI" %}
+ format %{ "vector_popcount_int $dst, $src\t! vector popcount packedI" %}
ins_encode %{
assert(UsePopCountInstruction, "not enabled");
+ int vlen_enc = vector_length_encoding(this);
+ __ vector_popcount_int($dst$$XMMRegister, $src$$XMMRegister, xnoreg, xnoreg, xnoreg, noreg, vlen_enc);
+ %}
+ ins_pipe( pipe_slow );
+%}
+instruct vpopcountI(vec dst, vec src, vec xtmp1, vec xtmp2, vec xtmp3, rRegP rtmp, rFlagsReg cc) %{
+ predicate(!VM_Version::supports_avx512_vpopcntdq());
+ match(Set dst (PopCountVI src));
+ effect(TEMP dst, TEMP xtmp1, TEMP xtmp2, TEMP xtmp3, TEMP rtmp, KILL cc);
+ format %{ "vector_popcount_int $dst, $src\t! using $xtmp1, $xtmp2, $xtmp3, and $rtmp as TEMP" %}
+ ins_encode %{
+ assert(UsePopCountInstruction, "not enabled");
int vlen_enc = vector_length_encoding(this);
- __ vpopcntd($dst$$XMMRegister, $src$$XMMRegister, vlen_enc);
+ __ vector_popcount_int($dst$$XMMRegister, $src$$XMMRegister, $xtmp1$$XMMRegister, $xtmp2$$XMMRegister,
+ $xtmp3$$XMMRegister, $rtmp$$Register, vlen_enc);
%}
ins_pipe( pipe_slow );
%}
-instruct vpopcountL(vec dst, vec src) %{
+instruct vpopcountL_popcntd(vec dst, vec src) %{
+ predicate(VM_Version::supports_avx512_vpopcntdq());
match(Set dst (PopCountVL src));
- format %{ "vpopcntq $dst,$src\t! vector popcount packedL" %}
+ format %{ "vector_popcount_long $dst, $src\t! vector popcount packedL" %}
ins_encode %{
assert(UsePopCountInstruction, "not enabled");
-
int vlen_enc = vector_length_encoding(this, $src);
- __ vpopcntq($dst$$XMMRegister, $src$$XMMRegister, vlen_enc);
- __ evpmovqd($dst$$XMMRegister, $dst$$XMMRegister, vlen_enc);
+ __ vector_popcount_long($dst$$XMMRegister, $src$$XMMRegister, xnoreg, xnoreg, xnoreg, noreg, vlen_enc);
+ %}
+ ins_pipe( pipe_slow );
+%}
+instruct vpopcountL(vec dst, vec src, vec xtmp1, vec xtmp2, vec xtmp3, rRegP rtmp, rFlagsReg cc) %{
+ predicate(!VM_Version::supports_avx512_vpopcntdq());
+ match(Set dst (PopCountVL src));
+ effect(TEMP dst, TEMP xtmp1, TEMP xtmp2, TEMP xtmp3, TEMP rtmp, KILL cc);
+ format %{ "vector_popcount_long $dst, $src\t! using $xtmp1, $xtmp2, $xtmp3, and $rtmp as TEMP" %}
+ ins_encode %{
+ assert(UsePopCountInstruction, "not enabled");
+ int vlen_enc = vector_length_encoding(this, $src);
+ __ vector_popcount_long($dst$$XMMRegister, $src$$XMMRegister, $xtmp1$$XMMRegister, $xtmp2$$XMMRegister,
+ $xtmp3$$XMMRegister, $rtmp$$Register, vlen_enc);
%}
ins_pipe( pipe_slow );
%}
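A note on the fallback path above: on CPUs without AVX512-VPOPCNTDQ, the new `vector_popcount_int`/`vector_popcount_long` macro-assembler routines (not shown in this patch) are assumed to use the classic nibble-lookup popcount technique, where a 16-entry table of 4-bit popcounts is applied to every nibble (the vector form does all lanes at once via `pshufb`). A scalar C++ sketch of that idea:

```cpp
#include <cassert>
#include <cstdint>

// Nibble-lookup popcount: the table holds the bit count of every 4-bit value.
// The vectorized variant broadcasts this table into an XMM register and uses
// pshufb to look up all byte lanes in parallel; here each nibble is handled
// sequentially for clarity.
static const uint8_t kNibbleBits[16] = {0,1,1,2, 1,2,2,3, 1,2,2,3, 2,3,3,4};

uint32_t popcount_nibble_lut(uint32_t x) {
  uint32_t bits = 0;
  for (int i = 0; i < 8; i++) {  // 8 nibbles in a 32-bit lane
    bits += kNibbleBits[x & 0xF];
    x >>= 4;
  }
  return bits;
}
```

This is only an illustration of the technique; the actual register allocation (the `xtmp`/`rtmp` TEMPs declared in the instructs above) belongs to the real macro-assembler implementation.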
diff --git a/src/hotspot/cpu/x86/x86_32.ad b/src/hotspot/cpu/x86/x86_32.ad
index a31a38a384fe54e6b2964dafcea7729eea487928..9bba150516ed134ae16e34b52cf97242f2ef22e7 100644
--- a/src/hotspot/cpu/x86/x86_32.ad
+++ b/src/hotspot/cpu/x86/x86_32.ad
@@ -1,5 +1,5 @@
//
-// Copyright (c) 1997, 2021, Oracle and/or its affiliates. All rights reserved.
+// Copyright (c) 1997, 2022, Oracle and/or its affiliates. All rights reserved.
// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
//
// This code is free software; you can redistribute it and/or modify it
@@ -7825,9 +7825,9 @@ instruct divI_eReg(eAXRegI rax, eDXRegI rdx, eCXRegI div, eFlagsReg cr) %{
%}
// Divide Register Long
-instruct divL_eReg( eADXRegL dst, eRegL src1, eRegL src2, eFlagsReg cr, eCXRegI cx, eBXRegI bx ) %{
+instruct divL_eReg(eADXRegL dst, eRegL src1, eRegL src2) %{
match(Set dst (DivL src1 src2));
- effect( KILL cr, KILL cx, KILL bx );
+ effect(CALL);
ins_cost(10000);
format %{ "PUSH $src1.hi\n\t"
"PUSH $src1.lo\n\t"
@@ -7873,9 +7873,9 @@ instruct modI_eReg(eDXRegI rdx, eAXRegI rax, eCXRegI div, eFlagsReg cr) %{
%}
// Remainder Register Long
-instruct modL_eReg( eADXRegL dst, eRegL src1, eRegL src2, eFlagsReg cr, eCXRegI cx, eBXRegI bx ) %{
+instruct modL_eReg(eADXRegL dst, eRegL src1, eRegL src2) %{
match(Set dst (ModL src1 src2));
- effect( KILL cr, KILL cx, KILL bx );
+ effect(CALL);
ins_cost(10000);
format %{ "PUSH $src1.hi\n\t"
"PUSH $src1.lo\n\t"
@@ -12122,34 +12122,34 @@ instruct array_equalsC_evex(eDIRegP ary1, eSIRegP ary2, eAXRegI result,
ins_pipe( pipe_slow );
%}
-instruct has_negatives(eSIRegP ary1, eCXRegI len, eAXRegI result,
- regD tmp1, regD tmp2, eBXRegI tmp3, eFlagsReg cr)
+instruct count_positives(eSIRegP ary1, eCXRegI len, eAXRegI result,
+ regD tmp1, regD tmp2, eBXRegI tmp3, eFlagsReg cr)
%{
predicate(!VM_Version::supports_avx512vlbw() || !VM_Version::supports_bmi2());
- match(Set result (HasNegatives ary1 len));
+ match(Set result (CountPositives ary1 len));
effect(TEMP tmp1, TEMP tmp2, USE_KILL ary1, USE_KILL len, KILL tmp3, KILL cr);
- format %{ "has negatives byte[] $ary1,$len -> $result // KILL $tmp1, $tmp2, $tmp3" %}
+ format %{ "countPositives byte[] $ary1,$len -> $result // KILL $tmp1, $tmp2, $tmp3" %}
ins_encode %{
- __ has_negatives($ary1$$Register, $len$$Register,
- $result$$Register, $tmp3$$Register,
- $tmp1$$XMMRegister, $tmp2$$XMMRegister, knoreg, knoreg);
+ __ count_positives($ary1$$Register, $len$$Register,
+ $result$$Register, $tmp3$$Register,
+ $tmp1$$XMMRegister, $tmp2$$XMMRegister, knoreg, knoreg);
%}
ins_pipe( pipe_slow );
%}
-instruct has_negatives_evex(eSIRegP ary1, eCXRegI len, eAXRegI result,
- regD tmp1, regD tmp2, kReg ktmp1, kReg ktmp2, eBXRegI tmp3, eFlagsReg cr)
+instruct count_positives_evex(eSIRegP ary1, eCXRegI len, eAXRegI result,
+ regD tmp1, regD tmp2, kReg ktmp1, kReg ktmp2, eBXRegI tmp3, eFlagsReg cr)
%{
predicate(VM_Version::supports_avx512vlbw() && VM_Version::supports_bmi2());
- match(Set result (HasNegatives ary1 len));
+ match(Set result (CountPositives ary1 len));
effect(TEMP tmp1, TEMP tmp2, TEMP ktmp1, TEMP ktmp2, USE_KILL ary1, USE_KILL len, KILL tmp3, KILL cr);
- format %{ "has negatives byte[] $ary1,$len -> $result // KILL $tmp1, $tmp2, $tmp3" %}
+ format %{ "countPositives byte[] $ary1,$len -> $result // KILL $tmp1, $tmp2, $tmp3" %}
ins_encode %{
- __ has_negatives($ary1$$Register, $len$$Register,
- $result$$Register, $tmp3$$Register,
- $tmp1$$XMMRegister, $tmp2$$XMMRegister, $ktmp1$$KRegister, $ktmp2$$KRegister);
+ __ count_positives($ary1$$Register, $len$$Register,
+ $result$$Register, $tmp3$$Register,
+ $tmp1$$XMMRegister, $tmp2$$XMMRegister, $ktmp1$$KRegister, $ktmp2$$KRegister);
%}
ins_pipe( pipe_slow );
%}
diff --git a/src/hotspot/cpu/x86/x86_64.ad b/src/hotspot/cpu/x86/x86_64.ad
index fbf71300dcd6b6da8a0dc4b034f0c152dfdcdaae..09ff707599472eb57dbf77fa9fe21dc12d744b6b 100644
--- a/src/hotspot/cpu/x86/x86_64.ad
+++ b/src/hotspot/cpu/x86/x86_64.ad
@@ -1,5 +1,5 @@
//
-// Copyright (c) 2003, 2021, Oracle and/or its affiliates. All rights reserved.
+// Copyright (c) 2003, 2022, Oracle and/or its affiliates. All rights reserved.
// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
//
// This code is free software; you can redistribute it and/or modify it
@@ -241,6 +241,11 @@ reg_class long_no_rcx_reg %{
return _LONG_NO_RCX_REG_mask;
%}
+// Class for all long registers (excluding RBP and R13)
+reg_class long_no_rbp_r13_reg %{
+ return _LONG_NO_RBP_R13_REG_mask;
+%}
+
// Class for all int registers (excluding RSP)
reg_class int_reg %{
return _INT_REG_mask;
@@ -256,6 +261,11 @@ reg_class int_no_rcx_reg %{
return _INT_NO_RCX_REG_mask;
%}
+// Class for all int registers (excluding RBP and R13)
+reg_class int_no_rbp_r13_reg %{
+ return _INT_NO_RBP_R13_REG_mask;
+%}
+
// Singleton class for RAX pointer register
reg_class ptr_rax_reg(RAX, RAX_H);
@@ -319,9 +329,11 @@ extern RegMask _PTR_NO_RAX_RBX_REG_mask;
extern RegMask _LONG_REG_mask;
extern RegMask _LONG_NO_RAX_RDX_REG_mask;
extern RegMask _LONG_NO_RCX_REG_mask;
+extern RegMask _LONG_NO_RBP_R13_REG_mask;
extern RegMask _INT_REG_mask;
extern RegMask _INT_NO_RAX_RDX_REG_mask;
extern RegMask _INT_NO_RCX_REG_mask;
+extern RegMask _INT_NO_RBP_R13_REG_mask;
extern RegMask _FLOAT_REG_mask;
extern RegMask _STACK_OR_PTR_REG_mask;
@@ -348,9 +360,11 @@ RegMask _PTR_NO_RAX_RBX_REG_mask;
RegMask _LONG_REG_mask;
RegMask _LONG_NO_RAX_RDX_REG_mask;
RegMask _LONG_NO_RCX_REG_mask;
+RegMask _LONG_NO_RBP_R13_REG_mask;
RegMask _INT_REG_mask;
RegMask _INT_NO_RAX_RDX_REG_mask;
RegMask _INT_NO_RCX_REG_mask;
+RegMask _INT_NO_RBP_R13_REG_mask;
RegMask _FLOAT_REG_mask;
RegMask _STACK_OR_PTR_REG_mask;
RegMask _STACK_OR_LONG_REG_mask;
@@ -409,6 +423,12 @@ void reg_mask_init() {
_LONG_NO_RCX_REG_mask.Remove(OptoReg::as_OptoReg(rcx->as_VMReg()));
_LONG_NO_RCX_REG_mask.Remove(OptoReg::as_OptoReg(rcx->as_VMReg()->next()));
+ _LONG_NO_RBP_R13_REG_mask = _LONG_REG_mask;
+ _LONG_NO_RBP_R13_REG_mask.Remove(OptoReg::as_OptoReg(rbp->as_VMReg()));
+ _LONG_NO_RBP_R13_REG_mask.Remove(OptoReg::as_OptoReg(rbp->as_VMReg()->next()));
+ _LONG_NO_RBP_R13_REG_mask.Remove(OptoReg::as_OptoReg(r13->as_VMReg()));
+ _LONG_NO_RBP_R13_REG_mask.Remove(OptoReg::as_OptoReg(r13->as_VMReg()->next()));
+
_INT_REG_mask = _ALL_INT_REG_mask;
if (PreserveFramePointer) {
_INT_REG_mask.Remove(OptoReg::as_OptoReg(rbp->as_VMReg()));
@@ -427,6 +447,10 @@ void reg_mask_init() {
_INT_NO_RCX_REG_mask = _INT_REG_mask;
_INT_NO_RCX_REG_mask.Remove(OptoReg::as_OptoReg(rcx->as_VMReg()));
+ _INT_NO_RBP_R13_REG_mask = _INT_REG_mask;
+ _INT_NO_RBP_R13_REG_mask.Remove(OptoReg::as_OptoReg(rbp->as_VMReg()));
+ _INT_NO_RBP_R13_REG_mask.Remove(OptoReg::as_OptoReg(r13->as_VMReg()));
+
// _FLOAT_REG_LEGACY_mask/_FLOAT_REG_EVEX_mask is generated by adlc
// from the float_reg_legacy/float_reg_evex register class.
_FLOAT_REG_mask = VM_Version::supports_evex() ? _FLOAT_REG_EVEX_mask : _FLOAT_REG_LEGACY_mask;
@@ -1926,7 +1950,7 @@ encode %{
Label done;
// cmp $0x80000000,%eax
- __ cmp(as_Register(RAX_enc), 0x80000000);
+ __ cmpl(as_Register(RAX_enc), 0x80000000);
// jne e
__ jccb(Assembler::notEqual, normal);
@@ -3491,6 +3515,21 @@ operand no_rax_rdx_RegI()
interface(REG_INTER);
%}
+operand no_rbp_r13_RegI()
+%{
+ constraint(ALLOC_IN_RC(int_no_rbp_r13_reg));
+ match(RegI);
+ match(rRegI);
+ match(rax_RegI);
+ match(rbx_RegI);
+ match(rcx_RegI);
+ match(rdx_RegI);
+ match(rdi_RegI);
+
+ format %{ %}
+ interface(REG_INTER);
+%}
+
// Pointer Register
operand any_RegP()
%{
@@ -3718,6 +3757,19 @@ operand rdx_RegL()
interface(REG_INTER);
%}
+operand no_rbp_r13_RegL()
+%{
+ constraint(ALLOC_IN_RC(long_no_rbp_r13_reg));
+ match(RegL);
+ match(rRegL);
+ match(rax_RegL);
+ match(rcx_RegL);
+ match(rdx_RegL);
+
+ format %{ %}
+ interface(REG_INTER);
+%}
+
// Flags register, used as output of compare instructions
operand rFlagsReg()
%{
@@ -7443,14 +7495,53 @@ instruct decI_mem(memory dst, immI_M1 src, rFlagsReg cr)
ins_pipe(ialu_mem_imm);
%}
-instruct leaI_rReg_immI(rRegI dst, rRegI src0, immI src1)
+instruct leaI_rReg_immI2_immI(rRegI dst, rRegI index, immI2 scale, immI disp)
%{
- match(Set dst (AddI src0 src1));
+ predicate(VM_Version::supports_fast_2op_lea());
+ match(Set dst (AddI (LShiftI index scale) disp));
- ins_cost(110);
- format %{ "addr32 leal $dst, [$src0 + $src1]\t# int" %}
+ format %{ "leal $dst, [$index << $scale + $disp]\t# int" %}
+ ins_encode %{
+ Address::ScaleFactor scale = static_cast<Address::ScaleFactor>($scale$$constant);
+ __ leal($dst$$Register, Address(noreg, $index$$Register, scale, $disp$$constant));
+ %}
+ ins_pipe(ialu_reg_reg);
+%}
+
+instruct leaI_rReg_rReg_immI(rRegI dst, rRegI base, rRegI index, immI disp)
+%{
+ predicate(VM_Version::supports_fast_3op_lea());
+ match(Set dst (AddI (AddI base index) disp));
+
+ format %{ "leal $dst, [$base + $index + $disp]\t# int" %}
ins_encode %{
- __ leal($dst$$Register, Address($src0$$Register, $src1$$constant));
+ __ leal($dst$$Register, Address($base$$Register, $index$$Register, Address::times_1, $disp$$constant));
+ %}
+ ins_pipe(ialu_reg_reg);
+%}
+
+instruct leaI_rReg_rReg_immI2(rRegI dst, no_rbp_r13_RegI base, rRegI index, immI2 scale)
+%{
+ predicate(VM_Version::supports_fast_2op_lea());
+ match(Set dst (AddI base (LShiftI index scale)));
+
+ format %{ "leal $dst, [$base + $index << $scale]\t# int" %}
+ ins_encode %{
+ Address::ScaleFactor scale = static_cast<Address::ScaleFactor>($scale$$constant);
+ __ leal($dst$$Register, Address($base$$Register, $index$$Register, scale));
+ %}
+ ins_pipe(ialu_reg_reg);
+%}
+
+instruct leaI_rReg_rReg_immI2_immI(rRegI dst, rRegI base, rRegI index, immI2 scale, immI disp)
+%{
+ predicate(VM_Version::supports_fast_3op_lea());
+ match(Set dst (AddI (AddI base (LShiftI index scale)) disp));
+
+ format %{ "leal $dst, [$base + $index << $scale + $disp]\t# int" %}
+ ins_encode %{
+ Address::ScaleFactor scale = static_cast<Address::ScaleFactor>($scale$$constant);
+ __ leal($dst$$Register, Address($base$$Register, $index$$Register, scale, $disp$$constant));
%}
ins_pipe(ialu_reg_reg);
%}
@@ -7574,14 +7665,53 @@ instruct decL_mem(memory dst, immL_M1 src, rFlagsReg cr)
ins_pipe(ialu_mem_imm);
%}
-instruct leaL_rReg_immL(rRegL dst, rRegL src0, immL32 src1)
+instruct leaL_rReg_immI2_immL32(rRegL dst, rRegL index, immI2 scale, immL32 disp)
%{
- match(Set dst (AddL src0 src1));
+ predicate(VM_Version::supports_fast_2op_lea());
+ match(Set dst (AddL (LShiftL index scale) disp));
- ins_cost(110);
- format %{ "leaq $dst, [$src0 + $src1]\t# long" %}
+ format %{ "leaq $dst, [$index << $scale + $disp]\t# long" %}
+ ins_encode %{
+ Address::ScaleFactor scale = static_cast<Address::ScaleFactor>($scale$$constant);
+ __ leaq($dst$$Register, Address(noreg, $index$$Register, scale, $disp$$constant));
+ %}
+ ins_pipe(ialu_reg_reg);
+%}
+
+instruct leaL_rReg_rReg_immL32(rRegL dst, rRegL base, rRegL index, immL32 disp)
+%{
+ predicate(VM_Version::supports_fast_3op_lea());
+ match(Set dst (AddL (AddL base index) disp));
+
+ format %{ "leaq $dst, [$base + $index + $disp]\t# long" %}
+ ins_encode %{
+ __ leaq($dst$$Register, Address($base$$Register, $index$$Register, Address::times_1, $disp$$constant));
+ %}
+ ins_pipe(ialu_reg_reg);
+%}
+
+instruct leaL_rReg_rReg_immI2(rRegL dst, no_rbp_r13_RegL base, rRegL index, immI2 scale)
+%{
+ predicate(VM_Version::supports_fast_2op_lea());
+ match(Set dst (AddL base (LShiftL index scale)));
+
+ format %{ "leaq $dst, [$base + $index << $scale]\t# long" %}
+ ins_encode %{
+ Address::ScaleFactor scale = static_cast<Address::ScaleFactor>($scale$$constant);
+ __ leaq($dst$$Register, Address($base$$Register, $index$$Register, scale));
+ %}
+ ins_pipe(ialu_reg_reg);
+%}
+
+instruct leaL_rReg_rReg_immI2_immL32(rRegL dst, rRegL base, rRegL index, immI2 scale, immL32 disp)
+%{
+ predicate(VM_Version::supports_fast_3op_lea());
+ match(Set dst (AddL (AddL base (LShiftL index scale)) disp));
+
+ format %{ "leaq $dst, [$base + $index << $scale + $disp]\t# long" %}
ins_encode %{
- __ leaq($dst$$Register, Address($src0$$Register, $src1$$constant));
+ Address::ScaleFactor scale = static_cast<Address::ScaleFactor>($scale$$constant);
+ __ leaq($dst$$Register, Address($base$$Register, $index$$Register, scale, $disp$$constant));
%}
ins_pipe(ialu_reg_reg);
%}
@@ -7612,18 +7742,6 @@ instruct addP_rReg_imm(rRegP dst, immL32 src, rFlagsReg cr)
// XXX addP mem ops ????
-instruct leaP_rReg_imm(rRegP dst, rRegP src0, immL32 src1)
-%{
- match(Set dst (AddP src0 src1));
-
- ins_cost(110);
- format %{ "leaq $dst, [$src0 + $src1]\t# ptr" %}
- ins_encode %{
- __ leaq($dst$$Register, Address($src0$$Register, $src1$$constant));
- %}
- ins_pipe(ialu_reg_reg);
-%}
-
instruct checkCastPP(rRegP dst)
%{
match(Set dst (CheckCastPP dst));
@@ -11685,34 +11803,34 @@ instruct array_equalsC_evex(rdi_RegP ary1, rsi_RegP ary2, rax_RegI result,
ins_pipe( pipe_slow );
%}
-instruct has_negatives(rsi_RegP ary1, rcx_RegI len, rax_RegI result,
- legRegD tmp1, legRegD tmp2, rbx_RegI tmp3, rFlagsReg cr,)
+instruct count_positives(rsi_RegP ary1, rcx_RegI len, rax_RegI result,
+ legRegD tmp1, legRegD tmp2, rbx_RegI tmp3, rFlagsReg cr,)
%{
predicate(!VM_Version::supports_avx512vlbw() || !VM_Version::supports_bmi2());
- match(Set result (HasNegatives ary1 len));
+ match(Set result (CountPositives ary1 len));
effect(TEMP tmp1, TEMP tmp2, USE_KILL ary1, USE_KILL len, KILL tmp3, KILL cr);
- format %{ "has negatives byte[] $ary1,$len -> $result // KILL $tmp1, $tmp2, $tmp3" %}
+ format %{ "countPositives byte[] $ary1,$len -> $result // KILL $tmp1, $tmp2, $tmp3" %}
ins_encode %{
- __ has_negatives($ary1$$Register, $len$$Register,
- $result$$Register, $tmp3$$Register,
- $tmp1$$XMMRegister, $tmp2$$XMMRegister, knoreg, knoreg);
+ __ count_positives($ary1$$Register, $len$$Register,
+ $result$$Register, $tmp3$$Register,
+ $tmp1$$XMMRegister, $tmp2$$XMMRegister, knoreg, knoreg);
%}
ins_pipe( pipe_slow );
%}
-instruct has_negatives_evex(rsi_RegP ary1, rcx_RegI len, rax_RegI result,
- legRegD tmp1, legRegD tmp2, kReg ktmp1, kReg ktmp2, rbx_RegI tmp3, rFlagsReg cr,)
+instruct count_positives_evex(rsi_RegP ary1, rcx_RegI len, rax_RegI result,
+ legRegD tmp1, legRegD tmp2, kReg ktmp1, kReg ktmp2, rbx_RegI tmp3, rFlagsReg cr,)
%{
predicate(VM_Version::supports_avx512vlbw() && VM_Version::supports_bmi2());
- match(Set result (HasNegatives ary1 len));
+ match(Set result (CountPositives ary1 len));
effect(TEMP tmp1, TEMP tmp2, TEMP ktmp1, TEMP ktmp2, USE_KILL ary1, USE_KILL len, KILL tmp3, KILL cr);
- format %{ "has negatives byte[] $ary1,$len -> $result // KILL $tmp1, $tmp2, $tmp3" %}
+ format %{ "countPositives byte[] $ary1,$len -> $result // KILL $tmp1, $tmp2, $tmp3" %}
ins_encode %{
- __ has_negatives($ary1$$Register, $len$$Register,
- $result$$Register, $tmp3$$Register,
- $tmp1$$XMMRegister, $tmp2$$XMMRegister, $ktmp1$$KRegister, $ktmp2$$KRegister);
+ __ count_positives($ary1$$Register, $len$$Register,
+ $result$$Register, $tmp3$$Register,
+ $tmp1$$XMMRegister, $tmp2$$XMMRegister, $ktmp1$$KRegister, $ktmp2$$KRegister);
%}
ins_pipe( pipe_slow );
%}
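The new `leaI_*`/`leaL_*` match rules above strength-reduce add/shift node trees into a single addressing-mode computation. The x86 address form `[base + index << scale + disp]` is plain arithmetic; a small sketch of the equivalence the matcher relies on (function name illustrative):

```cpp
#include <cstdint>

// What `leaq dst, [base + index*2^scale + disp]` computes in one instruction.
// The matcher rewrites the ideal node tree
//   (AddL (AddL base (LShiftL index scale)) disp)
// into exactly this expression, provided the scale fits in 2 bits (immI2,
// i.e. a shift of 0..3) and the displacement fits in 32 bits (immL32).
int64_t lea_form(int64_t base, int64_t index, int scale, int32_t disp) {
  return base + (index << scale) + disp;
}
```

The `no_rbp_r13_Reg*` operand classes exist because, as the `supports_fast_3op_lea` comment explains, using rbp or r13 as the base forces a zero displacement into the encoding, silently turning the cheap 2-operand form into the slower 3-operand one.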
diff --git a/src/hotspot/cpu/zero/copy_zero.hpp b/src/hotspot/cpu/zero/copy_zero.hpp
index e45e598f74c92494cd8f8ff04763585ec19524c5..1594e861535f850371403be20c9516f3a828e151 100644
--- a/src/hotspot/cpu/zero/copy_zero.hpp
+++ b/src/hotspot/cpu/zero/copy_zero.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2003, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2003, 2022, Oracle and/or its affiliates. All rights reserved.
* Copyright 2007 Red Hat, Inc.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -52,22 +52,7 @@ static void pd_disjoint_words(const HeapWord* from, HeapWord* to, size_t count)
static void pd_disjoint_words_atomic(const HeapWord* from,
HeapWord* to,
size_t count) {
- switch (count) {
- case 8: to[7] = from[7];
- case 7: to[6] = from[6];
- case 6: to[5] = from[5];
- case 5: to[4] = from[4];
- case 4: to[3] = from[3];
- case 3: to[2] = from[2];
- case 2: to[1] = from[1];
- case 1: to[0] = from[0];
- case 0: break;
- default:
- while (count-- > 0) {
- *to++ = *from++;
- }
- break;
- }
+ shared_disjoint_words_atomic(from, to, count);
}
static void pd_aligned_conjoint_words(const HeapWord* from,
diff --git a/src/hotspot/cpu/zero/frame_zero.inline.hpp b/src/hotspot/cpu/zero/frame_zero.inline.hpp
index 396e189a5db4039fbf03ca242a56919e524d4354..dfca0e4bcb11c4f9bbc08d09a6d791f566430ae5 100644
--- a/src/hotspot/cpu/zero/frame_zero.inline.hpp
+++ b/src/hotspot/cpu/zero/frame_zero.inline.hpp
@@ -82,6 +82,11 @@ inline intptr_t* frame::link() const {
return NULL;
}
+inline intptr_t* frame::link_or_null() const {
+ ShouldNotCallThis();
+ return NULL;
+}
+
inline interpreterState frame::get_interpreterState() const {
return zero_interpreterframe()->interpreter_state();
}
diff --git a/src/hotspot/os/aix/attachListener_aix.cpp b/src/hotspot/os/aix/attachListener_aix.cpp
index 25dfe8d816b609cff331213d80d7c892cad32716..461b7fc874f46dc20985019192355b3709aeee5e 100644
--- a/src/hotspot/os/aix/attachListener_aix.cpp
+++ b/src/hotspot/os/aix/attachListener_aix.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2005, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2005, 2022, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2012, 2018 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -28,7 +28,6 @@
#include "runtime/interfaceSupport.inline.hpp"
#include "runtime/os.inline.hpp"
#include "services/attachListener.hpp"
-#include "services/dtraceAttacher.hpp"
#include
#include
diff --git a/src/hotspot/os/bsd/attachListener_bsd.cpp b/src/hotspot/os/bsd/attachListener_bsd.cpp
index 9daad43dc7ad567dd87c9ce44b1363d18c4f5931..b8702c5aa763ef87658db1c1a77b898620d42459 100644
--- a/src/hotspot/os/bsd/attachListener_bsd.cpp
+++ b/src/hotspot/os/bsd/attachListener_bsd.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2005, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2005, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -27,7 +27,6 @@
#include "runtime/interfaceSupport.inline.hpp"
#include "runtime/os.inline.hpp"
#include "services/attachListener.hpp"
-#include "services/dtraceAttacher.hpp"
#include
#include
diff --git a/src/hotspot/os/bsd/threadCritical_bsd.cpp b/src/hotspot/os/bsd/threadCritical_bsd.cpp
deleted file mode 100644
index 71c51df599d7bcf7c6592f9bfa44a2bb1f245fc0..0000000000000000000000000000000000000000
--- a/src/hotspot/os/bsd/threadCritical_bsd.cpp
+++ /dev/null
@@ -1,61 +0,0 @@
-/*
- * Copyright (c) 2001, 2010, Oracle and/or its affiliates. All rights reserved.
- * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
- *
- * This code is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License version 2 only, as
- * published by the Free Software Foundation.
- *
- * This code is distributed in the hope that it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * version 2 for more details (a copy is included in the LICENSE file that
- * accompanied this code).
- *
- * You should have received a copy of the GNU General Public License version
- * 2 along with this work; if not, write to the Free Software Foundation,
- * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
- *
- * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
- * or visit www.oracle.com if you need additional information or have any
- * questions.
- *
- */
-
-#include "precompiled.hpp"
-#include "runtime/thread.inline.hpp"
-#include "runtime/threadCritical.hpp"
-
-// put OS-includes here
-# include
-
-//
-// See threadCritical.hpp for details of this class.
-//
-
-static pthread_t tc_owner = 0;
-static pthread_mutex_t tc_mutex = PTHREAD_MUTEX_INITIALIZER;
-static int tc_count = 0;
-
-ThreadCritical::ThreadCritical() {
- pthread_t self = pthread_self();
- if (self != tc_owner) {
- int ret = pthread_mutex_lock(&tc_mutex);
- guarantee(ret == 0, "fatal error with pthread_mutex_lock()");
- assert(tc_count == 0, "Lock acquired with illegal reentry count.");
- tc_owner = self;
- }
- tc_count++;
-}
-
-ThreadCritical::~ThreadCritical() {
- assert(tc_owner == pthread_self(), "must have correct owner");
- assert(tc_count > 0, "must have correct count");
-
- tc_count--;
- if (tc_count == 0) {
- tc_owner = 0;
- int ret = pthread_mutex_unlock(&tc_mutex);
- guarantee(ret == 0, "fatal error with pthread_mutex_unlock()");
- }
-}
diff --git a/src/hotspot/os/linux/attachListener_linux.cpp b/src/hotspot/os/linux/attachListener_linux.cpp
index eb723603b5f19e509bc1f0f9e2035805e714b8c9..a6ab0ae8c943d64643a757cf07a0ecf914e61e15 100644
--- a/src/hotspot/os/linux/attachListener_linux.cpp
+++ b/src/hotspot/os/linux/attachListener_linux.cpp
@@ -28,7 +28,6 @@
#include "runtime/interfaceSupport.inline.hpp"
#include "runtime/os.inline.hpp"
#include "services/attachListener.hpp"
-#include "services/dtraceAttacher.hpp"
#include
#include
diff --git a/src/hotspot/os/linux/cgroupSubsystem_linux.cpp b/src/hotspot/os/linux/cgroupSubsystem_linux.cpp
index dd858a30e4cdf3b7354d02d59a2a434ec4a3d97c..1346cf8915f111729f9aa7f99d06bac705fba7da 100644
--- a/src/hotspot/os/linux/cgroupSubsystem_linux.cpp
+++ b/src/hotspot/os/linux/cgroupSubsystem_linux.cpp
@@ -495,7 +495,12 @@ int CgroupSubsystem::active_processor_count() {
cpu_count = limit_count = os::Linux::active_processor_count();
int quota = cpu_quota();
int period = cpu_period();
- int share = cpu_shares();
+
+ // It's not a good idea to use cpu_shares() to limit the number
+ // of CPUs used by the JVM. See JDK-8281181.
+ // UseContainerCpuShares and PreferContainerQuotaForCPUCount are
+ // deprecated and will be removed in the next JDK release.
+ int share = UseContainerCpuShares ? cpu_shares() : -1;
if (quota > -1 && period > 0) {
quota_count = ceilf((float)quota / (float)period);
diff --git a/src/hotspot/os/linux/globals_linux.hpp b/src/hotspot/os/linux/globals_linux.hpp
index 72915b5afbbbe38d25dc624a48685cb09d7fe2f5..2fc4a404b3411244e8acb26c8f767d264fed4d91 100644
--- a/src/hotspot/os/linux/globals_linux.hpp
+++ b/src/hotspot/os/linux/globals_linux.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2005, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2005, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -59,10 +59,14 @@
product(bool, UseContainerSupport, true, \
"Enable detection and runtime container configuration support") \
\
+ product(bool, UseContainerCpuShares, false, \
+ "(Deprecated) Include CPU shares in the CPU availability" \
+ " calculation.") \
+ \
product(bool, PreferContainerQuotaForCPUCount, true, \
- "Calculate the container CPU availability based on the value" \
- " of quotas (if set), when true. Otherwise, use the CPU" \
- " shares value, provided it is less than quota.") \
+ "(Deprecated) Calculate the container CPU availability based" \
+ " on the value of quotas (if set), when true. Otherwise, use" \
+ " the CPU shares value, provided it is less than quota.") \
\
product(bool, AdjustStackSizeForTLS, false, \
"Increase the thread stack size to include space for glibc " \
diff --git a/src/hotspot/os/linux/os_linux.cpp b/src/hotspot/os/linux/os_linux.cpp
index 18b908cfc8fc0eec9e0369b342807920521158de..f2ecca92c82834f5a455e003bd3e429134e75067 100644
--- a/src/hotspot/os/linux/os_linux.cpp
+++ b/src/hotspot/os/linux/os_linux.cpp
@@ -1,5 +1,6 @@
/*
* Copyright (c) 1999, 2022, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2015, 2022 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -2092,6 +2093,34 @@ bool os::Linux::query_process_memory_info(os::Linux::meminfo_t* info) {
return false;
}
+#ifdef __GLIBC__
+// For Glibc, print a one-liner with the malloc tunables.
+// Most important and popular is MALLOC_ARENA_MAX, but we are
+// thorough and print them all.
+static void print_glibc_malloc_tunables(outputStream* st) {
+ static const char* var[] = {
+ // the new variant
+ "GLIBC_TUNABLES",
+ // legacy variants
+ "MALLOC_CHECK_", "MALLOC_TOP_PAD_", "MALLOC_PERTURB_",
+ "MALLOC_MMAP_THRESHOLD_", "MALLOC_TRIM_THRESHOLD_",
+ "MALLOC_MMAP_MAX_", "MALLOC_ARENA_TEST", "MALLOC_ARENA_MAX",
+ NULL};
+ st->print("glibc malloc tunables: ");
+ bool printed = false;
+ for (int i = 0; var[i] != NULL; i ++) {
+ const char* const val = ::getenv(var[i]);
+ if (val != NULL) {
+ st->print("%s%s=%s", (printed ? ", " : ""), var[i], val);
+ printed = true;
+ }
+ }
+ if (!printed) {
+ st->print("(default)");
+ }
+}
+#endif // __GLIBC__
+
void os::Linux::print_process_memory_info(outputStream* st) {
st->print_cr("Process Memory:");
@@ -2114,8 +2143,9 @@ void os::Linux::print_process_memory_info(outputStream* st) {
st->print_cr("Could not open /proc/self/status to get process memory related information");
}
- // Print glibc outstanding allocations.
- // (note: there is no implementation of mallinfo for muslc)
+ // glibc only:
+ // - Print outstanding allocations using mallinfo
+ // - Print glibc tunables
#ifdef __GLIBC__
size_t total_allocated = 0;
bool might_have_wrapped = false;
@@ -2123,9 +2153,10 @@ void os::Linux::print_process_memory_info(outputStream* st) {
struct glibc_mallinfo2 mi = _mallinfo2();
total_allocated = mi.uordblks;
} else if (_mallinfo != NULL) {
- // mallinfo is an old API. Member names mean next to nothing and, beyond that, are int.
- // So values may have wrapped around. Still useful enough to see how much glibc thinks
- // we allocated.
+ // mallinfo is an old API. Member names mean next to nothing and, beyond that, are 32-bit signed.
+ // So for larger footprints the values may have wrapped around. We try to detect this here: if the
+ // process whole resident set size is smaller than 4G, malloc footprint has to be less than that
+ // and the numbers are reliable.
struct glibc_mallinfo mi = _mallinfo();
total_allocated = (size_t)(unsigned)mi.uordblks;
// Since mallinfo members are int, glibc values may have wrapped. Warn about this.
@@ -2136,8 +2167,10 @@ void os::Linux::print_process_memory_info(outputStream* st) {
total_allocated / K,
might_have_wrapped ? " (may have wrapped)" : "");
}
-#endif // __GLIBC__
-
+ // Tunables
+ print_glibc_malloc_tunables(st);
+ st->cr();
+#endif
}
bool os::Linux::print_ld_preload_file(outputStream* st) {
@@ -3939,23 +3972,14 @@ char* os::Linux::reserve_memory_special_shm(size_t bytes, size_t alignment,
return addr;
}
-static void warn_on_commit_special_failure(char* req_addr, size_t bytes,
+static void log_on_commit_special_failure(char* req_addr, size_t bytes,
size_t page_size, int error) {
assert(error == ENOMEM, "Only expect to fail if no memory is available");
- bool warn_on_failure = UseLargePages &&
- (!FLAG_IS_DEFAULT(UseLargePages) ||
- !FLAG_IS_DEFAULT(UseHugeTLBFS) ||
- !FLAG_IS_DEFAULT(LargePageSizeInBytes));
-
- if (warn_on_failure) {
- char msg[128];
- jio_snprintf(msg, sizeof(msg), "Failed to reserve and commit memory. req_addr: "
- PTR_FORMAT " bytes: " SIZE_FORMAT " page size: "
- SIZE_FORMAT " (errno = %d).",
- req_addr, bytes, page_size, error);
- warning("%s", msg);
- }
+ log_info(pagesize)("Failed to reserve and commit memory with given page size. req_addr: " PTR_FORMAT
+ " size: " SIZE_FORMAT "%s, page size: " SIZE_FORMAT "%s, (errno = %d)",
+ p2i(req_addr), byte_size_in_exact_unit(bytes), exact_unit_for_byte_size(bytes),
+ byte_size_in_exact_unit(page_size), exact_unit_for_byte_size(page_size), error);
}
bool os::Linux::commit_memory_special(size_t bytes,
@@ -3977,7 +4001,7 @@ bool os::Linux::commit_memory_special(size_t bytes,
char* addr = (char*)::mmap(req_addr, bytes, prot, flags, -1, 0);
if (addr == MAP_FAILED) {
- warn_on_commit_special_failure(req_addr, bytes, page_size, errno);
+ log_on_commit_special_failure(req_addr, bytes, page_size, errno);
return false;
}
diff --git a/src/hotspot/os/linux/threadCritical_linux.cpp b/src/hotspot/os/linux/threadCritical_linux.cpp
deleted file mode 100644
index 71c51df599d7bcf7c6592f9bfa44a2bb1f245fc0..0000000000000000000000000000000000000000
--- a/src/hotspot/os/linux/threadCritical_linux.cpp
+++ /dev/null
@@ -1,61 +0,0 @@
-/*
- * Copyright (c) 2001, 2010, Oracle and/or its affiliates. All rights reserved.
- * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
- *
- * This code is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License version 2 only, as
- * published by the Free Software Foundation.
- *
- * This code is distributed in the hope that it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * version 2 for more details (a copy is included in the LICENSE file that
- * accompanied this code).
- *
- * You should have received a copy of the GNU General Public License version
- * 2 along with this work; if not, write to the Free Software Foundation,
- * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
- *
- * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
- * or visit www.oracle.com if you need additional information or have any
- * questions.
- *
- */
-
-#include "precompiled.hpp"
-#include "runtime/thread.inline.hpp"
-#include "runtime/threadCritical.hpp"
-
-// put OS-includes here
-# include <pthread.h>
-
-//
-// See threadCritical.hpp for details of this class.
-//
-
-static pthread_t tc_owner = 0;
-static pthread_mutex_t tc_mutex = PTHREAD_MUTEX_INITIALIZER;
-static int tc_count = 0;
-
-ThreadCritical::ThreadCritical() {
- pthread_t self = pthread_self();
- if (self != tc_owner) {
- int ret = pthread_mutex_lock(&tc_mutex);
- guarantee(ret == 0, "fatal error with pthread_mutex_lock()");
- assert(tc_count == 0, "Lock acquired with illegal reentry count.");
- tc_owner = self;
- }
- tc_count++;
-}
-
-ThreadCritical::~ThreadCritical() {
- assert(tc_owner == pthread_self(), "must have correct owner");
- assert(tc_count > 0, "must have correct count");
-
- tc_count--;
- if (tc_count == 0) {
- tc_owner = 0;
- int ret = pthread_mutex_unlock(&tc_mutex);
- guarantee(ret == 0, "fatal error with pthread_mutex_unlock()");
- }
-}
diff --git a/src/hotspot/os/posix/signals_posix.cpp b/src/hotspot/os/posix/signals_posix.cpp
index 895c3cc09ae88c5d7ec04c57e8716062b46a707a..6e94b47712f95f9194a3d4a3a138eec629b9535e 100644
--- a/src/hotspot/os/posix/signals_posix.cpp
+++ b/src/hotspot/os/posix/signals_posix.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2022, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2020, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -1556,8 +1556,6 @@ void PosixSignals::hotspot_sigmask(Thread* thread) {
// - Forte Analyzer: AsyncGetCallTrace()
// - StackBanging: get_frame_at_stack_banging_point()
-sigset_t SR_sigset;
-
static void resume_clear_context(OSThread *osthread) {
osthread->set_ucontext(NULL);
osthread->set_siginfo(NULL);
@@ -1673,14 +1671,11 @@ int SR_initialize() {
assert(PosixSignals::SR_signum > SIGSEGV && PosixSignals::SR_signum > SIGBUS,
"SR_signum must be greater than max(SIGSEGV, SIGBUS), see 4355769");
- sigemptyset(&SR_sigset);
- sigaddset(&SR_sigset, PosixSignals::SR_signum);
-
// Set up signal handler for suspend/resume
act.sa_flags = SA_RESTART|SA_SIGINFO;
act.sa_handler = (void (*)(int)) SR_handler;
- // SR_signum is blocked by default.
+ // SR_signum is blocked when the handler runs.
pthread_sigmask(SIG_BLOCK, NULL, &act.sa_mask);
remove_error_signals_from_set(&(act.sa_mask));
diff --git a/src/hotspot/os/aix/threadCritical_aix.cpp b/src/hotspot/os/posix/threadCritical_posix.cpp
similarity index 96%
rename from src/hotspot/os/aix/threadCritical_aix.cpp
rename to src/hotspot/os/posix/threadCritical_posix.cpp
index cd25cb68dc4646e54a6ff89b01afc73377dcfbe6..ee57352cb0cbef007055a2cc25148f3794dba055 100644
--- a/src/hotspot/os/aix/threadCritical_aix.cpp
+++ b/src/hotspot/os/posix/threadCritical_posix.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2001, 2013, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2001, 2022, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2012, 2014 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -24,8 +24,8 @@
*/
#include "precompiled.hpp"
-#include "runtime/threadCritical.hpp"
#include "runtime/thread.inline.hpp"
+#include "runtime/threadCritical.hpp"
// put OS-includes here
# include <pthread.h>
diff --git a/src/hotspot/os/windows/attachListener_windows.cpp b/src/hotspot/os/windows/attachListener_windows.cpp
index 8b5a2cb7ab4fb8e1bc9f9f209a9e4f8028c6cff1..710afc410051f627ff45807be7454e948e263a8b 100644
--- a/src/hotspot/os/windows/attachListener_windows.cpp
+++ b/src/hotspot/os/windows/attachListener_windows.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2005, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2005, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -27,7 +27,6 @@
#include "runtime/interfaceSupport.inline.hpp"
#include "runtime/os.hpp"
#include "services/attachListener.hpp"
-#include "services/dtraceAttacher.hpp"
#include <windows.h>
#include <signal.h> // SIGBREAK
diff --git a/src/hotspot/os/windows/os_windows.cpp b/src/hotspot/os/windows/os_windows.cpp
index ed4ba3e1fc2ce3074c31fbd2dc7d9e31d8bc5e8c..fd64008ab48e2851eeced4cfa203d0c05ba983d2 100644
--- a/src/hotspot/os/windows/os_windows.cpp
+++ b/src/hotspot/os/windows/os_windows.cpp
@@ -4458,7 +4458,7 @@ static errno_t get_full_path(LPCWSTR unicode_path, LPWSTR* full_path) {
return ERROR_SUCCESS;
}
-static void set_path_prefix(char* buf, LPWSTR* prefix, int* prefix_off, bool* needs_fullpath) {
+static void set_path_prefix(char* buf, LPCWSTR* prefix, int* prefix_off, bool* needs_fullpath) {
*prefix_off = 0;
*needs_fullpath = true;
@@ -4494,7 +4494,7 @@ static wchar_t* wide_abs_unc_path(char const* path, errno_t & err, int additiona
strncpy(buf, path, buf_len);
os::native_path(buf);
- LPWSTR prefix = NULL;
+ LPCWSTR prefix = NULL;
int prefix_off = 0;
bool needs_fullpath = true;
set_path_prefix(buf, &prefix, &prefix_off, &needs_fullpath);
diff --git a/src/hotspot/os_cpu/bsd_aarch64/pauth_bsd_aarch64.inline.hpp b/src/hotspot/os_cpu/bsd_aarch64/pauth_bsd_aarch64.inline.hpp
index a4d416d384e29f2d5daedd76611ce78cfc456e54..4d07bbef3033240d42775dd24ac0db729a7ba46d 100644
--- a/src/hotspot/os_cpu/bsd_aarch64/pauth_bsd_aarch64.inline.hpp
+++ b/src/hotspot/os_cpu/bsd_aarch64/pauth_bsd_aarch64.inline.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2021, Arm Limited. All rights reserved.
+ * Copyright (c) 2021, 2022, Arm Limited. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -25,29 +25,23 @@
#ifndef OS_CPU_BSD_AARCH64_PAUTH_BSD_AARCH64_INLINE_HPP
#define OS_CPU_BSD_AARCH64_PAUTH_BSD_AARCH64_INLINE_HPP
-#ifdef __APPLE__
-#include <ptrauth.h>
-#endif
-
-// Only the PAC instructions in the NOP space can be used. This ensures the
-// binaries work on systems without PAC. Write these instructions using their
-// alternate "hint" instructions to ensure older compilers can still be used.
-// For Apple, use the provided interface as this may provide additional
-// optimization.
-
-#define XPACLRI "hint #0x7;"
+// OS specific Support for ROP Protection in VM code.
+// For more details on PAC see pauth_aarch64.hpp.
inline address pauth_strip_pointer(address ptr) {
-#ifdef __APPLE__
- return ptrauth_strip(ptr, ptrauth_key_asib);
-#else
- register address result __asm__("x30") = ptr;
- asm (XPACLRI : "+r"(result));
- return result;
-#endif
+ // No PAC support in BSD as of yet.
+ return ptr;
}
-#undef XPACLRI
+inline address pauth_sign_return_address(address ret_addr, address sp) {
+ // No PAC support in BSD as of yet.
+ return ret_addr;
+}
+
+inline address pauth_authenticate_return_address(address ret_addr, address sp) {
+ // No PAC support in BSD as of yet.
+ return ret_addr;
+}
#endif // OS_CPU_BSD_AARCH64_PAUTH_BSD_AARCH64_INLINE_HPP
diff --git a/src/hotspot/os_cpu/linux_aarch64/pauth_linux_aarch64.inline.hpp b/src/hotspot/os_cpu/linux_aarch64/pauth_linux_aarch64.inline.hpp
index 6f3fd41539c62d430633af1e464fdb058bd9f090..1eb1b92b9365ce23a16dc69351ae9e8815c64851 100644
--- a/src/hotspot/os_cpu/linux_aarch64/pauth_linux_aarch64.inline.hpp
+++ b/src/hotspot/os_cpu/linux_aarch64/pauth_linux_aarch64.inline.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2021, Arm Limited. All rights reserved.
+ * Copyright (c) 2021, 2022, Arm Limited. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -25,18 +25,57 @@
#ifndef OS_CPU_LINUX_AARCH64_PAUTH_LINUX_AARCH64_INLINE_HPP
#define OS_CPU_LINUX_AARCH64_PAUTH_LINUX_AARCH64_INLINE_HPP
-// Only the PAC instructions in the NOP space can be used. This ensures the
-// binaries work on systems without PAC. Write these instructions using their
-// alternate "hint" instructions to ensure older compilers can still be used.
+// OS specific Support for ROP Protection in VM code.
+// For more details on PAC see pauth_aarch64.hpp.
-#define XPACLRI "hint #0x7;"
+inline bool pauth_ptr_is_raw(address ptr);
+// Use only the PAC instructions in the NOP space. This ensures the binaries work on systems
+// without PAC. Write these instructions using their alternate "hint" instructions to ensure older
+// compilers can still be used.
+#define XPACLRI "hint #0x7;"
+#define PACIA1716 "hint #0x8;"
+#define AUTIA1716 "hint #0xc;"
+
+// Strip an address. Use with caution - only if there is no guaranteed way of authenticating the
+// value.
+//
inline address pauth_strip_pointer(address ptr) {
register address result __asm__("x30") = ptr;
asm (XPACLRI : "+r"(result));
return result;
}
+// Sign a return value, using the given modifier.
+//
+inline address pauth_sign_return_address(address ret_addr, address sp) {
+ if (VM_Version::use_rop_protection()) {
+ // A pointer cannot be double signed.
+ guarantee(pauth_ptr_is_raw(ret_addr), "Return address is already signed");
+ register address r17 __asm("r17") = ret_addr;
+ register address r16 __asm("r16") = sp;
+ asm (PACIA1716 : "+r"(r17) : "r"(r16));
+ ret_addr = r17;
+ }
+ return ret_addr;
+}
+
+// Authenticate a return value, using the given modifier.
+//
+inline address pauth_authenticate_return_address(address ret_addr, address sp) {
+ if (VM_Version::use_rop_protection()) {
+ register address r17 __asm("r17") = ret_addr;
+ register address r16 __asm("r16") = sp;
+ asm (AUTIA1716 : "+r"(r17) : "r"(r16));
+ ret_addr = r17;
+ // Ensure that the pointer authenticated.
+ guarantee(pauth_ptr_is_raw(ret_addr), "Return address did not authenticate");
+ }
+ return ret_addr;
+}
+
#undef XPACLRI
+#undef PACIA1716
+#undef AUTIA1716
#endif // OS_CPU_LINUX_AARCH64_PAUTH_LINUX_AARCH64_INLINE_HPP
diff --git a/src/hotspot/os_cpu/linux_aarch64/threadLS_linux_aarch64.S b/src/hotspot/os_cpu/linux_aarch64/threadLS_linux_aarch64.S
index f541844b9d6dfd9d230ad73d0f19184731139af6..ac60d6aa941689c1b74a9fa6758cf11dded26df7 100644
--- a/src/hotspot/os_cpu/linux_aarch64/threadLS_linux_aarch64.S
+++ b/src/hotspot/os_cpu/linux_aarch64/threadLS_linux_aarch64.S
@@ -1,4 +1,4 @@
-// Copyright (c) 2015, Red Hat Inc. All rights reserved.
+// Copyright (c) 2015, 2022, Red Hat Inc. All rights reserved.
// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
//
// This code is free software; you can redistribute it and/or modify it
@@ -29,6 +29,7 @@
.type _ZN10JavaThread25aarch64_get_thread_helperEv, %function
_ZN10JavaThread25aarch64_get_thread_helperEv:
+ hint #0x19 // paciasp
stp x29, x30, [sp, -16]!
adrp x0, :tlsdesc:_ZN6Thread12_thr_currentE
ldr x1, [x0, #:tlsdesc_lo12:_ZN6Thread12_thr_currentE]
@@ -39,6 +40,7 @@ _ZN10JavaThread25aarch64_get_thread_helperEv:
add x0, x1, x0
ldr x0, [x0]
ldp x29, x30, [sp], 16
+ hint #0x1d // autiasp
ret
.size _ZN10JavaThread25aarch64_get_thread_helperEv, .-_ZN10JavaThread25aarch64_get_thread_helperEv
diff --git a/src/hotspot/os_cpu/linux_aarch64/vm_version_linux_aarch64.cpp b/src/hotspot/os_cpu/linux_aarch64/vm_version_linux_aarch64.cpp
index b5f5a0787e91ab611b1f9dbe0cb383c5796cec9e..b1080e77c908cf0bfdc63a77a42016e6d1a0de32 100644
--- a/src/hotspot/os_cpu/linux_aarch64/vm_version_linux_aarch64.cpp
+++ b/src/hotspot/os_cpu/linux_aarch64/vm_version_linux_aarch64.cpp
@@ -72,6 +72,10 @@
#define HWCAP_SVE (1 << 22)
#endif
+#ifndef HWCAP_PACA
+#define HWCAP_PACA (1 << 30)
+#endif
+
#ifndef HWCAP2_SVE2
#define HWCAP2_SVE2 (1 << 1)
#endif
@@ -111,6 +115,7 @@ void VM_Version::get_os_cpu_info() {
static_assert(CPU_SHA3 == HWCAP_SHA3, "Flag CPU_SHA3 must follow Linux HWCAP");
static_assert(CPU_SHA512 == HWCAP_SHA512, "Flag CPU_SHA512 must follow Linux HWCAP");
static_assert(CPU_SVE == HWCAP_SVE, "Flag CPU_SVE must follow Linux HWCAP");
+ static_assert(CPU_PACA == HWCAP_PACA, "Flag CPU_PACA must follow Linux HWCAP");
_features = auxv & (
HWCAP_FP |
HWCAP_ASIMD |
@@ -124,7 +129,8 @@ void VM_Version::get_os_cpu_info() {
HWCAP_DCPOP |
HWCAP_SHA3 |
HWCAP_SHA512 |
- HWCAP_SVE);
+ HWCAP_SVE |
+ HWCAP_PACA);
if (auxv2 & HWCAP2_SVE2) _features |= CPU_SVE2;
diff --git a/src/hotspot/os_cpu/linux_x86/os_linux_x86.cpp b/src/hotspot/os_cpu/linux_x86/os_linux_x86.cpp
index 5e346efee54e0279d03d4d2a7173600192208bf7..c6b945fdd7903e69ed661f5ca1778a511bbc253f 100644
--- a/src/hotspot/os_cpu/linux_x86/os_linux_x86.cpp
+++ b/src/hotspot/os_cpu/linux_x86/os_linux_x86.cpp
@@ -459,11 +459,26 @@ bool os::supports_sse() {
}
juint os::cpu_microcode_revision() {
+ // Note: this code runs on startup, and therefore should not be slow,
+ // see JDK-8283200.
+
juint result = 0;
- char data[2048] = {0}; // lines should fit in 2K buf
- size_t len = sizeof(data);
- FILE *fp = os::fopen("/proc/cpuinfo", "r");
+
+ // Attempt 1 (faster): Read the microcode version off the sysfs.
+ FILE *fp = os::fopen("/sys/devices/system/cpu/cpu0/microcode/version", "r");
+ if (fp) {
+ int read = fscanf(fp, "%x", &result);
+ fclose(fp);
+ if (read > 0) {
+ return result;
+ }
+ }
+
+ // Attempt 2 (slower): Read the microcode version off the procfs.
+ fp = os::fopen("/proc/cpuinfo", "r");
if (fp) {
+ char data[2048] = {0}; // lines should fit in 2K buf
+ size_t len = sizeof(data);
while (!feof(fp)) {
if (fgets(data, len, fp)) {
if (strstr(data, "microcode") != NULL) {
@@ -475,6 +490,7 @@ juint os::cpu_microcode_revision() {
}
fclose(fp);
}
+
return result;
}
diff --git a/src/hotspot/os_cpu/windows_aarch64/copy_windows_aarch64.hpp b/src/hotspot/os_cpu/windows_aarch64/copy_windows_aarch64.hpp
index 2d3c55cea39130770c733dfed20df332433af61f..ce2ad2d046f88ff6b65c4768eecd4e531a974114 100644
--- a/src/hotspot/os_cpu/windows_aarch64/copy_windows_aarch64.hpp
+++ b/src/hotspot/os_cpu/windows_aarch64/copy_windows_aarch64.hpp
@@ -1,5 +1,6 @@
/*
* Copyright (c) 2020, Microsoft Corporation. All rights reserved.
+ * Copyright (c) 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -49,21 +50,7 @@ static void pd_disjoint_words(const HeapWord* from, HeapWord* to, size_t count)
}
static void pd_disjoint_words_atomic(const HeapWord* from, HeapWord* to, size_t count) {
- switch (count) {
- case 8: to[7] = from[7];
- case 7: to[6] = from[6];
- case 6: to[5] = from[5];
- case 5: to[4] = from[4];
- case 4: to[3] = from[3];
- case 3: to[2] = from[2];
- case 2: to[1] = from[1];
- case 1: to[0] = from[0];
- case 0: break;
- default: while (count-- > 0) {
- *to++ = *from++;
- }
- break;
- }
+ shared_disjoint_words_atomic(from, to, count);
}
static void pd_aligned_conjoint_words(const HeapWord* from, HeapWord* to, size_t count) {
diff --git a/src/hotspot/os_cpu/windows_aarch64/pauth_windows_aarch64.inline.hpp b/src/hotspot/os_cpu/windows_aarch64/pauth_windows_aarch64.inline.hpp
index 844291ee1e41231818704e1a9321632e26f98b50..6b5c9eecb72a49d8caafc4f5c186c47ff0caecc4 100644
--- a/src/hotspot/os_cpu/windows_aarch64/pauth_windows_aarch64.inline.hpp
+++ b/src/hotspot/os_cpu/windows_aarch64/pauth_windows_aarch64.inline.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2021, Arm Limited. All rights reserved.
+ * Copyright (c) 2021, 2022, Arm Limited. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -25,10 +25,22 @@
#ifndef OS_CPU_WINDOWS_AARCH64_PAUTH_WINDOWS_AARCH64_INLINE_HPP
#define OS_CPU_WINDOWS_AARCH64_PAUTH_WINDOWS_AARCH64_INLINE_HPP
+// OS specific Support for ROP Protection in VM code.
+// For more details on PAC see pauth_aarch64.hpp.
+
inline address pauth_strip_pointer(address ptr) {
// No PAC support in windows as of yet.
return ptr;
}
-#endif // OS_CPU_WINDOWS_AARCH64_PAUTH_WINDOWS_AARCH64_INLINE_HPP
+inline address pauth_sign_return_address(address ret_addr, address sp) {
+ // No PAC support in windows as of yet.
+ return ret_addr;
+}
+inline address pauth_authenticate_return_address(address ret_addr, address sp) {
+ // No PAC support in windows as of yet.
+ return ret_addr;
+}
+
+#endif // OS_CPU_WINDOWS_AARCH64_PAUTH_WINDOWS_AARCH64_INLINE_HPP
diff --git a/src/hotspot/share/adlc/formssel.cpp b/src/hotspot/share/adlc/formssel.cpp
index 274e623ea61872c9072d240a718f5a445064b3ed..0ae7b07507436bf982dd2b3fc65ec151d1e5824c 100644
--- a/src/hotspot/share/adlc/formssel.cpp
+++ b/src/hotspot/share/adlc/formssel.cpp
@@ -612,7 +612,7 @@ bool InstructForm::needs_anti_dependence_check(FormDict &globals) const {
strcmp(_matrule->_rChild->_opType,"StrEquals" )==0 ||
strcmp(_matrule->_rChild->_opType,"StrIndexOf" )==0 ||
strcmp(_matrule->_rChild->_opType,"StrIndexOfChar" )==0 ||
- strcmp(_matrule->_rChild->_opType,"HasNegatives" )==0 ||
+ strcmp(_matrule->_rChild->_opType,"CountPositives" )==0 ||
strcmp(_matrule->_rChild->_opType,"AryEq" )==0 ))
return true;
@@ -902,7 +902,7 @@ uint InstructForm::oper_input_base(FormDict &globals) {
strcmp(_matrule->_rChild->_opType,"StrCompressedCopy" )==0 ||
strcmp(_matrule->_rChild->_opType,"StrIndexOf")==0 ||
strcmp(_matrule->_rChild->_opType,"StrIndexOfChar")==0 ||
- strcmp(_matrule->_rChild->_opType,"HasNegatives")==0 ||
+ strcmp(_matrule->_rChild->_opType,"CountPositives")==0 ||
strcmp(_matrule->_rChild->_opType,"EncodeISOArray")==0)) {
// String.(compareTo/equals/indexOf) and Arrays.equals
// and sun.nio.cs.iso8859_1$Encoder.EncodeISOArray
diff --git a/src/hotspot/share/asm/register.hpp b/src/hotspot/share/asm/register.hpp
index 06a8735f52061c05682bfd6158de1049086eb233..b8538e4df6810330e02f798b8baa4404f4d80c87 100644
--- a/src/hotspot/share/asm/register.hpp
+++ b/src/hotspot/share/asm/register.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2000, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2000, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -28,6 +28,7 @@
#include "utilities/debug.hpp"
#include "utilities/globalDefinitions.hpp"
#include "utilities/macros.hpp"
+#include "utilities/population_count.hpp"
// Use AbstractRegister as shortcut
class AbstractRegisterImpl;
@@ -36,7 +37,7 @@ typedef AbstractRegisterImpl* AbstractRegister;
// The super class for platform specific registers. Instead of using value objects,
// registers are implemented as pointers. Subclassing is used so all registers can
-// use the debugging suport below. No virtual functions are used for efficiency.
+// use the debugging support below. No virtual functions are used for efficiency.
// They are canonicalized; i.e., registers are equal if their pointers are equal,
// and vice versa. A concrete implementation may just map the register onto 'this'.
@@ -86,8 +87,149 @@ const type name = ((type)value)
#define INTERNAL_VISIBILITY
#endif
+template <class RegImpl> class RegSetIterator;
+template <class RegImpl> class ReverseRegSetIterator;
-#define REGISTER_DEFINITION(type, name)
+// A set of registers
+template <class RegImpl>
+class AbstractRegSet {
+ uint32_t _bitset;
+
+ AbstractRegSet(uint32_t bitset) : _bitset(bitset) { }
+
+public:
+
+ AbstractRegSet() : _bitset(0) { }
+
+ AbstractRegSet(RegImpl r1) : _bitset(1 << r1->encoding()) { }
+
+ AbstractRegSet operator+(const AbstractRegSet aSet) const {
+ AbstractRegSet result(_bitset | aSet._bitset);
+ return result;
+ }
+
+ AbstractRegSet operator-(const AbstractRegSet aSet) const {
+ AbstractRegSet result(_bitset & ~aSet._bitset);
+ return result;
+ }
+
+ AbstractRegSet &operator+=(const AbstractRegSet aSet) {
+ *this = *this + aSet;
+ return *this;
+ }
+
+ AbstractRegSet &operator-=(const AbstractRegSet aSet) {
+ *this = *this - aSet;
+ return *this;
+ }
+
+ static AbstractRegSet of(RegImpl r1) {
+ return AbstractRegSet(r1);
+ }
+
+ static AbstractRegSet of(RegImpl r1, RegImpl r2) {
+ return of(r1) + r2;
+ }
+
+ static AbstractRegSet of(RegImpl r1, RegImpl r2, RegImpl r3) {
+ return of(r1, r2) + r3;
+ }
+
+ static AbstractRegSet of(RegImpl r1, RegImpl r2, RegImpl r3, RegImpl r4) {
+ return of(r1, r2, r3) + r4;
+ }
+
+ static AbstractRegSet range(RegImpl start, RegImpl end) {
+ assert(start <= end, "must be");
+ uint32_t bits = ~0;
+ bits <<= start->encoding();
+ bits <<= 31 - end->encoding();
+ bits >>= 31 - end->encoding();
+
+ return AbstractRegSet(bits);
+ }
+
+ uint size() const { return population_count(_bitset); }
+
+ uint32_t bits() const { return _bitset; }
+
+private:
+
+ RegImpl first();
+ RegImpl last();
+
+public:
+
+  friend class RegSetIterator<RegImpl>;
+  friend class ReverseRegSetIterator<RegImpl>;
+
+  RegSetIterator<RegImpl> begin();
+  ReverseRegSetIterator<RegImpl> rbegin();
+};
+
+template <class RegImpl>
+class RegSetIterator {
+  AbstractRegSet<RegImpl> _regs;
+
+public:
+  RegSetIterator(AbstractRegSet<RegImpl> x): _regs(x) {}
+ RegSetIterator(const RegSetIterator& mit) : _regs(mit._regs) {}
+
+ RegSetIterator& operator++() {
+ RegImpl r = _regs.first();
+ if (r->is_valid())
+ _regs -= r;
+ return *this;
+ }
+
+ bool operator==(const RegSetIterator& rhs) const {
+ return _regs.bits() == rhs._regs.bits();
+ }
+ bool operator!=(const RegSetIterator& rhs) const {
+ return ! (rhs == *this);
+ }
+
+ RegImpl operator*() {
+ return _regs.first();
+ }
+};
+
+template <class RegImpl>
+inline RegSetIterator<RegImpl> AbstractRegSet<RegImpl>::begin() {
+  return RegSetIterator<RegImpl>(*this);
+}
+
+template <class RegImpl>
+class ReverseRegSetIterator {
+  AbstractRegSet<RegImpl> _regs;
+
+public:
+  ReverseRegSetIterator(AbstractRegSet<RegImpl> x): _regs(x) {}
+ ReverseRegSetIterator(const ReverseRegSetIterator& mit) : _regs(mit._regs) {}
+
+ ReverseRegSetIterator& operator++() {
+ RegImpl r = _regs.last();
+ if (r->is_valid())
+ _regs -= r;
+ return *this;
+ }
+
+ bool operator==(const ReverseRegSetIterator& rhs) const {
+ return _regs.bits() == rhs._regs.bits();
+ }
+ bool operator!=(const ReverseRegSetIterator& rhs) const {
+ return ! (rhs == *this);
+ }
+
+ RegImpl operator*() {
+ return _regs.last();
+ }
+};
+
+template <class RegImpl>
+inline ReverseRegSetIterator<RegImpl> AbstractRegSet<RegImpl>::rbegin() {
+  return ReverseRegSetIterator<RegImpl>(*this);
+}
#include CPU_HEADER(register)
diff --git a/src/hotspot/share/c1/c1_Compilation.cpp b/src/hotspot/share/c1/c1_Compilation.cpp
index 27c20a9c5fe4242132a7c5ccf9ccc2f58fa58886..baabbbd147bb82b696435f03fd50bcb114c03833 100644
--- a/src/hotspot/share/c1/c1_Compilation.cpp
+++ b/src/hotspot/share/c1/c1_Compilation.cpp
@@ -77,7 +77,6 @@ static int totalInstructionNodes = 0;
class PhaseTraceTime: public TraceTime {
private:
- JavaThread* _thread;
CompileLog* _log;
TimerName _timer;
@@ -560,6 +559,7 @@ Compilation::Compilation(AbstractCompiler* compiler, ciEnv* env, ciMethod* metho
, _has_exception_handlers(false)
, _has_fpu_code(true) // pessimistic assumption
, _has_unsafe_access(false)
+, _has_irreducible_loops(false)
, _would_profile(false)
, _has_method_handle_invokes(false)
, _has_reserved_stack_access(method->has_reserved_stack_access())
diff --git a/src/hotspot/share/c1/c1_Compilation.hpp b/src/hotspot/share/c1/c1_Compilation.hpp
index f3be9ed7cee295410cfbc521dd1a415f3e899757..02a2f367df37a5930c4efa3df1d1cf954843fc70 100644
--- a/src/hotspot/share/c1/c1_Compilation.hpp
+++ b/src/hotspot/share/c1/c1_Compilation.hpp
@@ -77,6 +77,7 @@ class Compilation: public StackObj {
bool _has_exception_handlers;
bool _has_fpu_code;
bool _has_unsafe_access;
+ bool _has_irreducible_loops;
bool _would_profile;
bool _has_method_handle_invokes; // True if this method has MethodHandle invokes.
bool _has_reserved_stack_access;
@@ -135,6 +136,7 @@ class Compilation: public StackObj {
bool has_exception_handlers() const { return _has_exception_handlers; }
bool has_fpu_code() const { return _has_fpu_code; }
bool has_unsafe_access() const { return _has_unsafe_access; }
+ bool has_irreducible_loops() const { return _has_irreducible_loops; }
int max_vector_size() const { return 0; }
ciMethod* method() const { return _method; }
int osr_bci() const { return _osr_bci; }
@@ -162,6 +164,7 @@ class Compilation: public StackObj {
void set_has_exception_handlers(bool f) { _has_exception_handlers = f; }
void set_has_fpu_code(bool f) { _has_fpu_code = f; }
void set_has_unsafe_access(bool f) { _has_unsafe_access = f; }
+ void set_has_irreducible_loops(bool f) { _has_irreducible_loops = f; }
void set_would_profile(bool f) { _would_profile = f; }
void set_has_access_indexed(bool f) { _has_access_indexed = f; }
// Add a set of exception handlers covering the given PC offset
diff --git a/src/hotspot/share/c1/c1_GraphBuilder.cpp b/src/hotspot/share/c1/c1_GraphBuilder.cpp
index 1b58188d422cd255d1be1a4a7af413986092444b..6bb39e38f6920bc8b3544ee407301ba634e8158c 100644
--- a/src/hotspot/share/c1/c1_GraphBuilder.cpp
+++ b/src/hotspot/share/c1/c1_GraphBuilder.cpp
@@ -59,7 +59,7 @@ class BlockListBuilder {
// fields used by mark_loops
ResourceBitMap _active; // for iteration of control flow graph
ResourceBitMap _visited; // for iteration of control flow graph
- intArray _loop_map; // caches the information if a block is contained in a loop
+ GrowableArray<ResourceBitMap> _loop_map; // caches the information if a block is contained in a loop
int _next_loop_index; // next free loop number
int _next_block_number; // for reverse postorder numbering of blocks
@@ -84,7 +84,7 @@ class BlockListBuilder {
void make_loop_header(BlockBegin* block);
void mark_loops();
- int mark_loops(BlockBegin* b, bool in_subroutine);
+ BitMap& mark_loops(BlockBegin* b, bool in_subroutine);
// debugging
#ifndef PRODUCT
@@ -376,17 +376,36 @@ void BlockListBuilder::mark_loops() {
_active.initialize(BlockBegin::number_of_blocks());
_visited.initialize(BlockBegin::number_of_blocks());
- _loop_map = intArray(BlockBegin::number_of_blocks(), BlockBegin::number_of_blocks(), 0);
+ _loop_map = GrowableArray<ResourceBitMap>(BlockBegin::number_of_blocks(), BlockBegin::number_of_blocks(), ResourceBitMap());
+ for (int i = 0; i < BlockBegin::number_of_blocks(); i++) {
+ _loop_map.at(i).initialize(BlockBegin::number_of_blocks());
+ }
_next_loop_index = 0;
_next_block_number = _blocks.length();
- // recursively iterate the control flow graph
- mark_loops(_bci2block->at(0), false);
+ // The loop detection algorithm works as follows:
+ // - We maintain the _loop_map, where for each block we have a bitmap indicating which loops contain this block.
+ // - The CFG is recursively traversed (depth-first) and if we detect a loop, we assign the loop a unique number that is stored
+ // in the bitmap associated with the loop header block. Until we return back through that loop header the bitmap contains
+ // only a single bit corresponding to the loop number.
+ // - The bit is then propagated to all the blocks in the loop after we exit them (post-order). There could, of course,
+ // be multiple bits in the case of nested loops.
+ // - When we exit the loop header we remove that single bit and assign the real loop state for it.
+ // - Now, the tricky part here is how we detect irreducible loops. In the algorithm above the loop state bits
+ // are propagated to the predecessors. If we encounter an irreducible loop (a loop with multiple heads) we would see
+ // a node with some loop bit set that would then propagate back and never be cleared, because we would
+ // never go back through the original loop header. Therefore, if there are any irreducible loops, the bits in the states
+ // for these loops are going to propagate back to the root.
+ BitMap& loop_state = mark_loops(_bci2block->at(0), false);
+ if (!loop_state.is_empty()) {
+ compilation()->set_has_irreducible_loops(true);
+ }
assert(_next_block_number >= 0, "invalid block numbers");
// Remove dangling Resource pointers before the ResourceMark goes out-of-scope.
_active.resize(0);
_visited.resize(0);
+ _loop_map.clear();
}
void BlockListBuilder::make_loop_header(BlockBegin* block) {
@@ -398,19 +417,17 @@ void BlockListBuilder::make_loop_header(BlockBegin* block) {
if (!block->is_set(BlockBegin::parser_loop_header_flag)) {
block->set(BlockBegin::parser_loop_header_flag);
- assert(_loop_map.at(block->block_id()) == 0, "must not be set yet");
- assert(0 <= _next_loop_index && _next_loop_index < BitsPerInt, "_next_loop_index is used as a bit-index in integer");
- _loop_map.at_put(block->block_id(), 1 << _next_loop_index);
- if (_next_loop_index < 31) _next_loop_index++;
+ assert(_loop_map.at(block->block_id()).is_empty(), "must not be set yet");
+ assert(0 <= _next_loop_index && _next_loop_index < BlockBegin::number_of_blocks(), "_next_loop_index is too large");
+ _loop_map.at(block->block_id()).set_bit(_next_loop_index++);
} else {
// block already marked as loop header
- assert(is_power_of_2((unsigned int)_loop_map.at(block->block_id())), "exactly one bit must be set");
+ assert(_loop_map.at(block->block_id()).count_one_bits() == 1, "exactly one bit must be set");
}
}
-int BlockListBuilder::mark_loops(BlockBegin* block, bool in_subroutine) {
+BitMap& BlockListBuilder::mark_loops(BlockBegin* block, bool in_subroutine) {
int block_id = block->block_id();
-
if (_visited.at(block_id)) {
if (_active.at(block_id)) {
// reached block via backward branch
@@ -428,10 +445,11 @@ int BlockListBuilder::mark_loops(BlockBegin* block, bool in_subroutine) {
_visited.set_bit(block_id);
_active.set_bit(block_id);
- intptr_t loop_state = 0;
+ ResourceMark rm;
+ ResourceBitMap loop_state(BlockBegin::number_of_blocks());
for (int i = number_of_successors(block) - 1; i >= 0; i--) {
// recursively process all successors
- loop_state |= mark_loops(successor_at(block, i), in_subroutine);
+ loop_state.set_union(mark_loops(successor_at(block, i), in_subroutine));
}
// clear active-bit after all successors are processed
@@ -441,26 +459,22 @@ int BlockListBuilder::mark_loops(BlockBegin* block, bool in_subroutine) {
block->set_depth_first_number(_next_block_number);
_next_block_number--;
- if (loop_state != 0 || in_subroutine ) {
+ if (!loop_state.is_empty() || in_subroutine ) {
// block is contained at least in one loop, so phi functions are necessary
// phi functions are also necessary for all locals stored in a subroutine
scope()->requires_phi_function().set_union(block->stores_to_locals());
}
if (block->is_set(BlockBegin::parser_loop_header_flag)) {
- int header_loop_state = _loop_map.at(block_id);
- assert(is_power_of_2((unsigned)header_loop_state), "exactly one bit must be set");
-
- // If the highest bit is set (i.e. when integer value is negative), the method
- // has 32 or more loops. This bit is never cleared because it is used for multiple loops
- if (header_loop_state >= 0) {
- clear_bits(loop_state, header_loop_state);
- }
+ BitMap& header_loop_state = _loop_map.at(block_id);
+ assert(header_loop_state.count_one_bits() == 1, "exactly one bit must be set");
+ // remove the bit with the loop number for the state (header is outside of the loop)
+ loop_state.set_difference(header_loop_state);
}
// cache and return loop information for this block
- _loop_map.at_put(block_id, loop_state);
- return loop_state;
+ _loop_map.at(block_id).set_from(loop_state);
+ return _loop_map.at(block_id);
}
inline int BlockListBuilder::number_of_successors(BlockBegin* block)
@@ -953,7 +967,8 @@ void GraphBuilder::load_constant() {
}
Value x;
if (patch_state != NULL) {
- x = new Constant(t, patch_state);
+ bool kills_memory = stream()->is_dynamic_constant(); // arbitrary memory effects from running BSM during linkage
+ x = new Constant(t, patch_state, kills_memory);
} else {
x = new Constant(t);
}
@@ -2495,7 +2510,7 @@ XHandlers* GraphBuilder::handle_exception(Instruction* instruction) {
// The only test case we've seen so far which exhibits this
// problem is caught by the infinite recursion test in
// GraphBuilder::jsr() if the join doesn't work.
- if (!entry->try_merge(cur_state)) {
+ if (!entry->try_merge(cur_state, compilation()->has_irreducible_loops())) {
BAILOUT_("error while joining with exception handler, prob. due to complicated jsr/rets", exception_handlers);
}
@@ -2981,7 +2996,7 @@ BlockEnd* GraphBuilder::iterate_bytecodes_for_block(int bci) {
BlockBegin* sux = end->sux_at(i);
assert(sux->is_predecessor(block()), "predecessor missing");
// be careful, bailout if bytecodes are strange
- if (!sux->try_merge(end->state())) BAILOUT_("block join failed", NULL);
+ if (!sux->try_merge(end->state(), compilation()->has_irreducible_loops())) BAILOUT_("block join failed", NULL);
scope_data()->add_to_work_list(end->sux_at(i));
}
@@ -3135,7 +3150,7 @@ BlockBegin* GraphBuilder::setup_start_block(int osr_bci, BlockBegin* std_entry,
if (base->std_entry()->state() == NULL) {
// setup states for header blocks
- base->std_entry()->merge(state);
+ base->std_entry()->merge(state, compilation()->has_irreducible_loops());
}
assert(base->std_entry()->state() != NULL, "");
@@ -3218,7 +3233,7 @@ void GraphBuilder::setup_osr_entry_block() {
Goto* g = new Goto(target, false);
append(g);
_osr_entry->set_end(g);
- target->merge(_osr_entry->end()->state());
+ target->merge(_osr_entry->end()->state(), compilation()->has_irreducible_loops());
scope_data()->set_stream(NULL);
}
@@ -3277,7 +3292,7 @@ GraphBuilder::GraphBuilder(Compilation* compilation, IRScope* scope)
// setup state for std entry
_initial_state = state_at_entry();
- start_block->merge(_initial_state);
+ start_block->merge(_initial_state, compilation->has_irreducible_loops());
// End nulls still exist here
@@ -4028,7 +4043,7 @@ bool GraphBuilder::try_inline_full(ciMethod* callee, bool holder_known, bool ign
// the entry bci for the callee instead of the call site bci.
append_with_bci(goto_callee, 0);
_block->set_end(goto_callee);
- callee_start_block->merge(callee_state);
+ callee_start_block->merge(callee_state, compilation()->has_irreducible_loops());
_last = _block = callee_start_block;
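The loop-detection algorithm described in the `mark_loops()` comments above can be modeled as a small standalone program. This is an illustrative sketch, not HotSpot code: blocks are plain `int` ids, the per-block `ResourceBitMap` is replaced by a `std::set<int>` of loop numbers, and the names `CFG`, `LoopBits`, and `LoopMarker` are invented for the example.

```cpp
#include <cassert>
#include <set>
#include <vector>

// Hypothetical CFG: for each block id, the list of successor block ids.
using CFG = std::vector<std::vector<int>>;
using LoopBits = std::set<int>;  // stands in for the per-block bitmap of loop numbers

struct LoopMarker {
  const CFG& cfg;
  std::vector<bool> visited, active, is_header;
  std::vector<LoopBits> loop_map;  // per-block cached loop state
  int next_loop = 0;

  explicit LoopMarker(const CFG& g)
      : cfg(g), visited(g.size()), active(g.size()),
        is_header(g.size()), loop_map(g.size()) {}

  LoopBits mark(int b) {
    if (visited[b]) {
      if (active[b] && !is_header[b]) {   // reached via a back edge: b heads a loop
        is_header[b] = true;
        loop_map[b] = {next_loop++};      // a single bit until we return through b
      }
      return loop_map[b];
    }
    visited[b] = active[b] = true;
    LoopBits state;
    for (int s : cfg[b]) {                // union the loop state of all successors
      LoopBits sub = mark(s);
      state.insert(sub.begin(), sub.end());
    }
    active[b] = false;
    if (is_header[b]) {
      state.erase(*loop_map[b].begin());  // the header itself is outside its loop
    }
    loop_map[b] = state;                  // cache the real loop state (post-order)
    return state;
  }

  // A non-empty state surviving at the root means some loop bit escaped its
  // header without being cleared: the CFG contains an irreducible loop.
  bool has_irreducible(int root) { return !mark(root).empty(); }
};
```

On a reducible CFG every loop bit is erased when the traversal returns through that loop's single header, so the root sees an empty state; with a multi-entry loop the bit reaches the root via the second entry, which is exactly the check `!loop_state.is_empty()` added above.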
diff --git a/src/hotspot/share/c1/c1_Instruction.cpp b/src/hotspot/share/c1/c1_Instruction.cpp
index 4b66796a543f02cb6026b6d1d8d51d08a4b4d434..75cb0c2ccd65ed4e96175ea53335a4f418442f6e 100644
--- a/src/hotspot/share/c1/c1_Instruction.cpp
+++ b/src/hotspot/share/c1/c1_Instruction.cpp
@@ -719,7 +719,7 @@ void BlockBegin::block_values_do(ValueVisitor* f) {
#endif
-bool BlockBegin::try_merge(ValueStack* new_state) {
+bool BlockBegin::try_merge(ValueStack* new_state, bool has_irreducible_loops) {
TRACE_PHI(tty->print_cr("********** try_merge for block B%d", block_id()));
// local variables used for state iteration
@@ -760,10 +760,9 @@ bool BlockBegin::try_merge(ValueStack* new_state) {
}
BitMap& requires_phi_function = new_state->scope()->requires_phi_function();
-
for_each_local_value(new_state, index, new_value) {
bool requires_phi = requires_phi_function.at(index) || (new_value->type()->is_double_word() && requires_phi_function.at(index + 1));
- if (requires_phi || !SelectivePhiFunctions) {
+ if (requires_phi || !SelectivePhiFunctions || has_irreducible_loops) {
new_state->setup_phi_for_local(this, index);
TRACE_PHI(tty->print_cr("creating phi-function %c%d for local %d", new_state->local_at(index)->type()->tchar(), new_state->local_at(index)->id(), index));
}
diff --git a/src/hotspot/share/c1/c1_Instruction.hpp b/src/hotspot/share/c1/c1_Instruction.hpp
index 1646557018f75496702cafd80b1020cc16fb1995..10bc7eb4fdf09ec4027d366ad58326b5af11e847 100644
--- a/src/hotspot/share/c1/c1_Instruction.hpp
+++ b/src/hotspot/share/c1/c1_Instruction.hpp
@@ -363,6 +363,7 @@ class Instruction: public CompilationResourceObj {
NeedsRangeCheckFlag,
InWorkListFlag,
DeoptimizeOnException,
+ KillsMemoryFlag,
InstructionLastFlag
};
@@ -718,13 +719,13 @@ LEAF(Constant, Instruction)
assert(type->is_constant(), "must be a constant");
}
- Constant(ValueType* type, ValueStack* state_before):
+ Constant(ValueType* type, ValueStack* state_before, bool kills_memory = false):
Instruction(type, state_before, /*type_is_constant*/ true)
{
assert(state_before != NULL, "only used for constants which need patching");
assert(type->is_constant(), "must be a constant");
- // since it's patching it needs to be pinned
- pin();
+ set_flag(KillsMemoryFlag, kills_memory);
+ pin(); // since it's patching it needs to be pinned
}
// generic
@@ -736,6 +737,8 @@ LEAF(Constant, Instruction)
virtual ciType* exact_type() const;
+ bool kills_memory() const { return check_flag(KillsMemoryFlag); }
+
enum CompareResult { not_comparable = -1, cond_false, cond_true };
virtual CompareResult compare(Instruction::Condition condition, Value right) const;
@@ -1776,8 +1779,11 @@ LEAF(BlockBegin, StateSplit)
int loop_index() const { return _loop_index; }
// merging
- bool try_merge(ValueStack* state); // try to merge states at block begin
- void merge(ValueStack* state) { bool b = try_merge(state); assert(b, "merge failed"); }
+ bool try_merge(ValueStack* state, bool has_irreducible_loops); // try to merge states at block begin
+ void merge(ValueStack* state, bool has_irreducible_loops) {
+ bool b = try_merge(state, has_irreducible_loops);
+ assert(b, "merge failed");
+ }
// debugging
void print_block() PRODUCT_RETURN;
diff --git a/src/hotspot/share/c1/c1_LIRAssembler.cpp b/src/hotspot/share/c1/c1_LIRAssembler.cpp
index be0a6abc2ca22f1fec53838ec73a3a68b06d3056..1c4e0d09306b52906385e1f16b44be4399f8bd99 100644
--- a/src/hotspot/share/c1/c1_LIRAssembler.cpp
+++ b/src/hotspot/share/c1/c1_LIRAssembler.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2000, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2000, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -32,7 +32,6 @@
#include "c1/c1_ValueStack.hpp"
#include "ci/ciInstance.hpp"
#include "compiler/oopMap.hpp"
-#include "gc/shared/barrierSet.hpp"
#include "runtime/os.hpp"
#include "runtime/vm_version.hpp"
@@ -104,7 +103,6 @@ PatchingStub::PatchID LIR_Assembler::patching_id(CodeEmitInfo* info) {
LIR_Assembler::LIR_Assembler(Compilation* c):
_masm(c->masm())
- , _bs(BarrierSet::barrier_set())
, _compilation(c)
, _frame_map(c->frame_map())
, _current_block(NULL)
diff --git a/src/hotspot/share/c1/c1_LIRAssembler.hpp b/src/hotspot/share/c1/c1_LIRAssembler.hpp
index f27ade60bae2869f06ae8a7d835a9f9ca8843ed1..1d873b9638da08d1493264b3620b006468b32e2f 100644
--- a/src/hotspot/share/c1/c1_LIRAssembler.hpp
+++ b/src/hotspot/share/c1/c1_LIRAssembler.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2000, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2000, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -32,13 +32,11 @@
class Compilation;
class ScopeValue;
-class BarrierSet;
class LIR_Assembler: public CompilationResourceObj {
private:
C1_MacroAssembler* _masm;
CodeStubList* _slow_case_stubs;
- BarrierSet* _bs;
Compilation* _compilation;
FrameMap* _frame_map;
diff --git a/src/hotspot/share/c1/c1_ValueMap.hpp b/src/hotspot/share/c1/c1_ValueMap.hpp
index 3ada748c67af3eb0808504c4d8b636c097ed843e..303ebba6c9d8a1bd5e0aec75ecc691fab9aa3d75 100644
--- a/src/hotspot/share/c1/c1_ValueMap.hpp
+++ b/src/hotspot/share/c1/c1_ValueMap.hpp
@@ -164,7 +164,12 @@ class ValueNumberingVisitor: public InstructionVisitor {
void do_Phi (Phi* x) { /* nothing to do */ }
void do_Local (Local* x) { /* nothing to do */ }
- void do_Constant (Constant* x) { /* nothing to do */ }
+ void do_Constant (Constant* x) {
+ if (x->kills_memory()) {
+ assert(x->can_trap(), "already linked");
+ kill_memory();
+ }
+ }
void do_LoadField (LoadField* x) {
if (x->is_init_point() || // getstatic is an initialization point so treat it as a wide kill
x->field()->is_volatile()) { // the JMM requires this
diff --git a/src/hotspot/share/cds/archiveBuilder.cpp b/src/hotspot/share/cds/archiveBuilder.cpp
index cb5c0aeb8c78008f044e0d009bddbd102fa38bbb..89529b2cf595de49207ec999afc316bdd68b2e3f 100644
--- a/src/hotspot/share/cds/archiveBuilder.cpp
+++ b/src/hotspot/share/cds/archiveBuilder.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2020, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2020, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -522,7 +522,8 @@ ArchiveBuilder::FollowMode ArchiveBuilder::get_follow_mode(MetaspaceClosure::Ref
if (MetaspaceShared::is_in_shared_metaspace(obj)) {
// Don't dump existing shared metadata again.
return point_to_it;
- } else if (ref->msotype() == MetaspaceObj::MethodDataType) {
+ } else if (ref->msotype() == MetaspaceObj::MethodDataType ||
+ ref->msotype() == MetaspaceObj::MethodCountersType) {
return set_to_null;
} else {
if (ref->msotype() == MetaspaceObj::ClassType) {
diff --git a/src/hotspot/share/cds/archiveUtils.hpp b/src/hotspot/share/cds/archiveUtils.hpp
index 588ad1b6da921152f1caaf4827504286d2e969ba..be8d8a0e84ed5add863781a91217db12e9fce2ee 100644
--- a/src/hotspot/share/cds/archiveUtils.hpp
+++ b/src/hotspot/share/cds/archiveUtils.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2019, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2019, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -30,6 +30,7 @@
#include "memory/virtualspace.hpp"
#include "utilities/bitMap.hpp"
#include "utilities/exceptions.hpp"
+#include "utilities/macros.hpp"
class BootstrapInfo;
class ReservedSpace;
@@ -147,7 +148,7 @@ public:
char* expand_top_to(char* newtop);
char* allocate(size_t num_bytes);
- void append_intptr_t(intptr_t n, bool need_to_mark = false);
+ void append_intptr_t(intptr_t n, bool need_to_mark = false) NOT_CDS_RETURN;
char* base() const { return _base; }
char* top() const { return _top; }
diff --git a/src/hotspot/share/cds/cdsHeapVerifier.cpp b/src/hotspot/share/cds/cdsHeapVerifier.cpp
new file mode 100644
index 0000000000000000000000000000000000000000..1eff911690110f9d8e0f1a59ab9f1fce92b82c9c
--- /dev/null
+++ b/src/hotspot/share/cds/cdsHeapVerifier.cpp
@@ -0,0 +1,305 @@
+/*
+ * Copyright (c) 2022, Oracle and/or its affiliates. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ *
+ */
+
+#include "precompiled.hpp"
+#include "cds/archiveBuilder.hpp"
+#include "cds/cdsHeapVerifier.hpp"
+#include "classfile/classLoaderDataGraph.hpp"
+#include "classfile/javaClasses.inline.hpp"
+#include "logging/log.hpp"
+#include "logging/logStream.hpp"
+#include "memory/resourceArea.hpp"
+#include "oops/fieldStreams.inline.hpp"
+#include "oops/klass.inline.hpp"
+#include "oops/oop.inline.hpp"
+#include "runtime/fieldDescriptor.inline.hpp"
+
+#if INCLUDE_CDS_JAVA_HEAP
+
+// CDSHeapVerifier is used to check for problems where an archived object references a
+// static field that may be reinitialized at runtime. In the following example,
+// Foo.get().test()
+// correctly returns true when CDS is disabled, but incorrectly returns false when CDS is enabled.
+//
+// class Foo {
+// static final Foo archivedFoo; // this field is archived by CDS
+// Bar bar;
+// static {
+// CDS.initializeFromArchive(Foo.class);
+// if (archivedFoo == null) {
+// archivedFoo = new Foo();
+// archivedFoo.bar = Bar.bar;
+// }
+// }
+// static Foo get() { return archivedFoo; }
+// boolean test() {
+// return bar == Bar.bar;
+// }
+// }
+//
+// class Bar {
+// // this field is initialized at both CDS dump time and runtime.
+// static final Bar bar = new Bar();
+// }
+//
+// The check itself is simple:
+// [1] CDSHeapVerifier::do_klass() collects all static fields
+// [2] CDSHeapVerifier::do_entry() checks all the archived objects. None of them
+// should be in [1]
+//
+// However, it's legal for *some* static fields to be references. This leads to the
+// table of ADD_EXCL below.
+//
+// [A] In most of the cases, the module bootstrap code will update the static field
+// to point to part of the archived module graph. E.g.,
+// - java/lang/System::bootLayer
+// - jdk/internal/loader/ClassLoaders::BOOT_LOADER
+// [B] A final static String that's explicitly initialized inside <clinit>, but
+// its value is deterministic and is always the same string literal.
+// [C] A non-final static string that is assigned a string literal during class
+// initialization; this string is never changed during -Xshare:dump.
+// [D] Simple caches whose value doesn't matter.
+// [E] Other cases (see comments in-line below).
+
+CDSHeapVerifier::CDSHeapVerifier() : _archived_objs(0), _problems(0)
+{
+# define ADD_EXCL(...) { static const char* e[] = {__VA_ARGS__, NULL}; add_exclusion(e); }
+
+ // Unfortunately this needs to be manually maintained. If
+ // test/hotspot/jtreg/runtime/cds/appcds/cacheObject/ArchivedEnumTest.java fails,
+ // you might need to fix the core library code, or fix the ADD_EXCL entries below.
+ //
+ // class field type
+ ADD_EXCL("java/lang/ClassLoader", "scl"); // A
+ ADD_EXCL("java/lang/invoke/InvokerBytecodeGenerator", "DONTINLINE_SIG", // B
+ "FORCEINLINE_SIG", // B
+ "HIDDEN_SIG", // B
+ "INJECTEDPROFILE_SIG", // B
+ "LF_COMPILED_SIG"); // B
+ ADD_EXCL("java/lang/Module", "ALL_UNNAMED_MODULE", // A
+ "ALL_UNNAMED_MODULE_SET", // A
+ "EVERYONE_MODULE", // A
+ "EVERYONE_SET"); // A
+ ADD_EXCL("java/lang/System", "bootLayer"); // A
+ ADD_EXCL("java/lang/VersionProps", "VENDOR_URL_BUG", // C
+ "VENDOR_URL_VM_BUG", // C
+ "VENDOR_VERSION"); // C
+ ADD_EXCL("java/net/URL$DefaultFactory", "PREFIX"); // B FIXME: JDK-8276561
+
+ // A dummy object used by HashSet. The value doesn't matter and it's never
+ // tested for equality.
+ ADD_EXCL("java/util/HashSet", "PRESENT"); // E
+ ADD_EXCL("jdk/internal/loader/BuiltinClassLoader", "packageToModule"); // A
+ ADD_EXCL("jdk/internal/loader/ClassLoaders", "BOOT_LOADER", // A
+ "APP_LOADER", // A
+ "PLATFORM_LOADER"); // A
+ ADD_EXCL("jdk/internal/loader/URLClassPath", "JAVA_VERSION"); // B
+ ADD_EXCL("jdk/internal/module/Builder", "cachedVersion"); // D
+ ADD_EXCL("jdk/internal/module/ModuleLoaderMap$Mapper", "APP_CLASSLOADER", // A
+ "APP_LOADER_INDEX", // A
+ "PLATFORM_CLASSLOADER", // A
+ "PLATFORM_LOADER_INDEX"); // A
+ ADD_EXCL("jdk/internal/module/ServicesCatalog", "CLV"); // A
+
+ // This just points to an empty Map
+ ADD_EXCL("jdk/internal/reflect/Reflection", "methodFilterMap"); // E
+ ADD_EXCL("jdk/internal/util/StaticProperty", "FILE_ENCODING"); // C
+
+ // Integer for 0 and 1 are in java/lang/Integer$IntegerCache and are archived
+ ADD_EXCL("sun/invoke/util/ValueConversions", "ONE_INT", // E
+ "ZERO_INT"); // E
+ ADD_EXCL("sun/security/util/SecurityConstants", "PROVIDER_VER"); // C
+
+
+# undef ADD_EXCL
+
+ ClassLoaderDataGraph::classes_do(this);
+}
+
+CDSHeapVerifier::~CDSHeapVerifier() {
+ if (_problems > 0) {
+ log_warning(cds, heap)("Scanned %d objects. Found %d case(s) where "
+ "an object points to a static field that may be "
+ "reinitialized at runtime.", _archived_objs, _problems);
+ }
+}
+
+class CDSHeapVerifier::CheckStaticFields : public FieldClosure {
+ CDSHeapVerifier* _verifier;
+ InstanceKlass* _ik;
+ const char** _exclusions;
+public:
+ CheckStaticFields(CDSHeapVerifier* verifier, InstanceKlass* ik)
+ : _verifier(verifier), _ik(ik) {
+ _exclusions = _verifier->find_exclusion(_ik);
+ }
+
+ void do_field(fieldDescriptor* fd) {
+ if (fd->field_type() != T_OBJECT) {
+ return;
+ }
+
+ oop static_obj_field = _ik->java_mirror()->obj_field(fd->offset());
+ if (static_obj_field != NULL) {
+ Klass* klass = static_obj_field->klass();
+ if (_exclusions != NULL) {
+ for (const char** p = _exclusions; *p != NULL; p++) {
+ if (fd->name()->equals(*p)) {
+ return;
+ }
+ }
+ }
+
+ if (fd->is_final() && java_lang_String::is_instance(static_obj_field) && fd->has_initial_value()) {
+ // This field looks like this in the Java source:
+ // static final String SOME_STRING = "a string literal";
+ // This string literal has been stored in the shared string table, so it's OK
+ // for the archived objects to refer to it.
+ return;
+ }
+ if (fd->is_final() && java_lang_Class::is_instance(static_obj_field)) {
+ // This field points to an archived mirror.
+ return;
+ }
+ if (klass->has_archived_enum_objs()) {
+ // This klass is a subclass of java.lang.Enum. If any instance of this klass
+ // has been archived, we will archive all static fields of this klass.
+ // See HeapShared::initialize_enum_klass().
+ return;
+ }
+
+ // This field *may* be initialized to a different value at runtime. Remember it
+ // and check later if it appears in the archived object graph.
+ _verifier->add_static_obj_field(_ik, static_obj_field, fd->name());
+ }
+ }
+};
+
+// Remember all the static object fields of every class that are currently
+// loaded.
+void CDSHeapVerifier::do_klass(Klass* k) {
+ if (k->is_instance_klass()) {
+ InstanceKlass* ik = InstanceKlass::cast(k);
+
+ if (HeapShared::is_subgraph_root_class(ik)) {
+ // ik is inside one of the ArchivableStaticFieldInfo tables
+ // in heapShared.cpp. We assume such classes are programmed to
+ // update their static fields correctly at runtime.
+ return;
+ }
+
+ CheckStaticFields csf(this, ik);
+ ik->do_local_static_fields(&csf);
+ }
+}
+
+void CDSHeapVerifier::add_static_obj_field(InstanceKlass* ik, oop field, Symbol* name) {
+ StaticFieldInfo info = {ik, name};
+ _table.put(field, info);
+}
+
+inline bool CDSHeapVerifier::do_entry(oop& orig_obj, HeapShared::CachedOopInfo& value) {
+ _archived_objs++;
+
+ StaticFieldInfo* info = _table.get(orig_obj);
+ if (info != NULL) {
+ ResourceMark rm;
+ LogStream ls(Log(cds, heap)::warning());
+ ls.print_cr("Archive heap points to a static field that may be reinitialized at runtime:");
+ ls.print_cr("Field: %s::%s", info->_holder->name()->as_C_string(), info->_name->as_C_string());
+ ls.print("Value: ");
+ orig_obj->print_on(&ls);
+ ls.print_cr("--- trace begin ---");
+ trace_to_root(orig_obj, NULL, &value);
+ ls.print_cr("--- trace end ---");
+ ls.cr();
+ _problems ++;
+ }
+
+ return true; /* keep on iterating */
+}
+
+class CDSHeapVerifier::TraceFields : public FieldClosure {
+ oop _orig_obj;
+ oop _orig_field;
+ LogStream* _ls;
+
+public:
+ TraceFields(oop orig_obj, oop orig_field, LogStream* ls)
+ : _orig_obj(orig_obj), _orig_field(orig_field), _ls(ls) {}
+
+ void do_field(fieldDescriptor* fd) {
+ if (fd->field_type() == T_OBJECT || fd->field_type() == T_ARRAY) {
+ oop obj_field = _orig_obj->obj_field(fd->offset());
+ if (obj_field == _orig_field) {
+ _ls->print("::%s (offset = %d)", fd->name()->as_C_string(), fd->offset());
+ }
+ }
+ }
+};
+
+// Hint: to exercise this function, uncomment out one of the ADD_EXCL lines above.
+int CDSHeapVerifier::trace_to_root(oop orig_obj, oop orig_field, HeapShared::CachedOopInfo* p) {
+ int level = 0;
+ LogStream ls(Log(cds, heap)::warning());
+ if (p->_referrer != NULL) {
+ HeapShared::CachedOopInfo* ref = HeapShared::archived_object_cache()->get(p->_referrer);
+ assert(ref != NULL, "sanity");
+ level = trace_to_root(p->_referrer, orig_obj, ref) + 1;
+ } else if (java_lang_String::is_instance(orig_obj)) {
+ ls.print_cr("[%2d] (shared string table)", level++);
+ }
+ Klass* k = orig_obj->klass();
+ ResourceMark rm;
+ ls.print("[%2d] ", level);
+ orig_obj->print_address_on(&ls);
+ ls.print(" %s", k->internal_name());
+ if (orig_field != NULL) {
+ if (k->is_instance_klass()) {
+ TraceFields clo(orig_obj, orig_field, &ls);
+ InstanceKlass::cast(k)->do_nonstatic_fields(&clo);
+ } else {
+ assert(orig_obj->is_objArray(), "must be");
+ objArrayOop array = (objArrayOop)orig_obj;
+ for (int i = 0; i < array->length(); i++) {
+ if (array->obj_at(i) == orig_field) {
+ ls.print(" @[%d]", i);
+ break;
+ }
+ }
+ }
+ }
+ ls.cr();
+
+ return level;
+}
+
+#ifdef ASSERT
+void CDSHeapVerifier::verify() {
+ CDSHeapVerifier verf;
+ HeapShared::archived_object_cache()->iterate(&verf);
+}
+#endif
+
+#endif // INCLUDE_CDS_JAVA_HEAP
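The two-phase check described in the comments above — `do_klass()` collects the objects currently referenced by static fields, then `do_entry()` flags any archived object that appears in that collection — can be modeled in a few lines. A simplified sketch under stated assumptions, not the HotSpot implementation: oops become `int` ids, `FieldRef` and `find_problems` are invented names, and exclusions are keyed by a `"Class::field"` string.

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <vector>

// Hypothetical stand-in for (holder klass, field name) in StaticFieldInfo.
struct FieldRef { std::string klass, field; };

// Phase 1 input: each object id currently held by a static field, with its owner.
// Phase 2: any archived object also reachable as a static-field value is a
// potential problem -- that field may be reinitialized at runtime -- unless
// the field is on the exclusion list.
std::vector<FieldRef> find_problems(
    const std::map<int, FieldRef>& static_field_values,  // obj id -> owning field
    const std::set<int>& archived_objects,
    const std::set<std::string>& exclusions) {           // "Class::field" keys
  std::vector<FieldRef> problems;
  for (const auto& [obj, ref] : static_field_values) {
    if (exclusions.count(ref.klass + "::" + ref.field)) continue;
    if (archived_objects.count(obj)) problems.push_back(ref);
  }
  return problems;
}
```

The real verifier is reversed in shape (it iterates the archived-object cache and probes a hashtable of static-field values), but the set-intersection logic is the same.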
diff --git a/src/hotspot/share/cds/cdsHeapVerifier.hpp b/src/hotspot/share/cds/cdsHeapVerifier.hpp
new file mode 100644
index 0000000000000000000000000000000000000000..830e41ae03db77b1e1d4472c778b7125801308d3
--- /dev/null
+++ b/src/hotspot/share/cds/cdsHeapVerifier.hpp
@@ -0,0 +1,89 @@
+/*
+ * Copyright (c) 2022, Oracle and/or its affiliates. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ *
+ */
+
+#ifndef SHARED_CDS_CDSHEAPVERIFIER_HPP
+#define SHARED_CDS_CDSHEAPVERIFIER_HPP
+
+#include "cds/heapShared.hpp"
+#include "memory/iterator.hpp"
+#include "utilities/growableArray.hpp"
+#include "utilities/resourceHash.hpp"
+
+class InstanceKlass;
+class Symbol;
+
+#if INCLUDE_CDS_JAVA_HEAP
+
+class CDSHeapVerifier : public KlassClosure {
+ class CheckStaticFields;
+ class TraceFields;
+
+ int _archived_objs;
+ int _problems;
+
+ struct StaticFieldInfo {
+ InstanceKlass* _holder;
+ Symbol* _name;
+ };
+
+ ResourceHashtable<oop, StaticFieldInfo> _table;
+
+ GrowableArray<const char**> _exclusions;
+
+ void add_exclusion(const char** excl) {
+ _exclusions.append(excl);
+ }
+ void add_static_obj_field(InstanceKlass* ik, oop field, Symbol* name);
+
+ const char** find_exclusion(InstanceKlass* ik) {
+ for (int i = 0; i < _exclusions.length(); i++) {
+ const char** excl = _exclusions.at(i);
+ if (ik->name()->equals(excl[0])) {
+ return &excl[1];
+ }
+ }
+ return NULL;
+ }
+ int trace_to_root(oop orig_obj, oop orig_field, HeapShared::CachedOopInfo* p);
+
+ CDSHeapVerifier();
+ ~CDSHeapVerifier();
+
+public:
+
+ // Overrides KlassClosure::do_klass()
+ virtual void do_klass(Klass* k);
+
+ // For ResourceHashtable::iterate()
+ inline bool do_entry(oop& orig_obj, HeapShared::CachedOopInfo& value);
+
+ static void verify() NOT_DEBUG_RETURN;
+};
+
+#endif // INCLUDE_CDS_JAVA_HEAP
+#endif // SHARED_CDS_CDSHEAPVERIFIER_HPP
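The `ADD_EXCL` / `find_exclusion` pattern above — one NULL-terminated `const char*` array per class, where `entry[0]` is the class name and `entry[1..]` are the excluded field names — can be exercised in isolation. A hedged sketch with invented names (`exclusions` as a plain `std::vector` instead of `GrowableArray`):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Each exclusion entry is a NULL-terminated array: entry[0] is the class name,
// entry[1..] are the excluded field names of that class.
static std::vector<const char**> exclusions;

#define ADD_EXCL(...) \
  { static const char* e[] = {__VA_ARGS__, nullptr}; exclusions.push_back(e); }

// Returns the field-name list for a class (pointing past the class name),
// or nullptr if the class has no registered exclusions.
const char** find_exclusion(const std::string& klass) {
  for (const char** e : exclusions) {
    if (klass == e[0]) return &e[1];
  }
  return nullptr;
}
```

The trailing `nullptr` appended by the macro is what lets callers walk the field list with a simple `for (const char** p = excl; *p != nullptr; p++)` loop, as `CheckStaticFields::do_field()` does.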
diff --git a/src/hotspot/share/cds/dumpTimeClassInfo.cpp b/src/hotspot/share/cds/dumpTimeClassInfo.cpp
index 77225969b1a4db35d5d9753aa6cff31cf9be1353..ac35e6583b4c4cfb2295b3c9fa7066f17dfe6d20 100644
--- a/src/hotspot/share/cds/dumpTimeClassInfo.cpp
+++ b/src/hotspot/share/cds/dumpTimeClassInfo.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2021, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -25,6 +25,7 @@
#include "precompiled.hpp"
#include "cds/archiveBuilder.hpp"
#include "cds/dumpTimeClassInfo.inline.hpp"
+#include "cds/runTimeClassInfo.hpp"
#include "classfile/classLoader.hpp"
#include "classfile/classLoaderData.inline.hpp"
#include "classfile/systemDictionaryShared.hpp"
@@ -45,6 +46,7 @@ DumpTimeClassInfo DumpTimeClassInfo::clone() {
clone._verifier_constraints = NULL;
clone._verifier_constraint_flags = NULL;
clone._loader_constraints = NULL;
+ clone._enum_klass_static_fields = NULL;
int clone_num_verifier_constraints = num_verifier_constraints();
if (clone_num_verifier_constraints > 0) {
 clone._verifier_constraints = new (ResourceObj::C_HEAP, mtClass) GrowableArray<DTVerifierConstraint>(clone_num_verifier_constraints, mtClass);
@@ -61,9 +63,16 @@ DumpTimeClassInfo DumpTimeClassInfo::clone() {
clone._loader_constraints->append(_loader_constraints->at(i));
}
}
+ assert(_enum_klass_static_fields == NULL, "This should not happen with jcmd VM.cds dumping");
return clone;
}
+size_t DumpTimeClassInfo::runtime_info_bytesize() const {
+ return RunTimeClassInfo::byte_size(_klass, num_verifier_constraints(),
+ num_loader_constraints(),
+ num_enum_klass_static_fields());
+}
+
void DumpTimeClassInfo::add_verification_constraint(InstanceKlass* k, Symbol* name,
Symbol* from_name, bool from_field_is_protected, bool from_is_array, bool from_is_object) {
if (_verifier_constraints == NULL) {
@@ -144,6 +153,18 @@ void DumpTimeClassInfo::record_linking_constraint(Symbol* name, Handle loader1,
}
}
+void DumpTimeClassInfo::add_enum_klass_static_field(int archived_heap_root_index) {
+ if (_enum_klass_static_fields == NULL) {
+    _enum_klass_static_fields = new (ResourceObj::C_HEAP, mtClass) GrowableArray<int>(20, mtClass);
+ }
+ _enum_klass_static_fields->append(archived_heap_root_index);
+}
+
+int DumpTimeClassInfo::enum_klass_static_field(int which_field) {
+ assert(_enum_klass_static_fields != NULL, "must be");
+ return _enum_klass_static_fields->at(which_field);
+}
+
bool DumpTimeClassInfo::is_builtin() {
return SystemDictionaryShared::is_builtin(_klass);
}
diff --git a/src/hotspot/share/cds/dumpTimeClassInfo.hpp b/src/hotspot/share/cds/dumpTimeClassInfo.hpp
index 28fe986ff7a32c72a664f044bfd23ed658c47f44..5b4f5cd9b9beb1f494c5a2a68b8fba2aae757657 100644
--- a/src/hotspot/share/cds/dumpTimeClassInfo.hpp
+++ b/src/hotspot/share/cds/dumpTimeClassInfo.hpp
@@ -77,6 +77,7 @@ public:
 GrowableArray<DTVerifierConstraint>* _verifier_constraints;
 GrowableArray<char>* _verifier_constraint_flags;
 GrowableArray<DTLoaderConstraint>* _loader_constraints;
+ GrowableArray<int>* _enum_klass_static_fields;
DumpTimeClassInfo() {
_klass = NULL;
@@ -92,28 +93,38 @@ public:
_verifier_constraints = NULL;
_verifier_constraint_flags = NULL;
_loader_constraints = NULL;
+ _enum_klass_static_fields = NULL;
}
void add_verification_constraint(InstanceKlass* k, Symbol* name,
Symbol* from_name, bool from_field_is_protected, bool from_is_array, bool from_is_object);
void record_linking_constraint(Symbol* name, Handle loader1, Handle loader2);
-
+ void add_enum_klass_static_field(int archived_heap_root_index);
+ int enum_klass_static_field(int which_field);
bool is_builtin();
- int num_verifier_constraints() {
- if (_verifier_constraint_flags != NULL) {
- return _verifier_constraint_flags->length();
- } else {
+private:
+ template <typename T>
+ static int array_length_or_zero(GrowableArray<T>* array) {
+ if (array == NULL) {
return 0;
+ } else {
+ return array->length();
}
}
- int num_loader_constraints() {
- if (_loader_constraints != NULL) {
- return _loader_constraints->length();
- } else {
- return 0;
- }
+public:
+
+ int num_verifier_constraints() const {
+ return array_length_or_zero(_verifier_constraint_flags);
+ }
+
+ int num_loader_constraints() const {
+ return array_length_or_zero(_loader_constraints);
+ }
+
+ int num_enum_klass_static_fields() const {
+ return array_length_or_zero(_enum_klass_static_fields);
}
void metaspace_pointers_do(MetaspaceClosure* it) {
@@ -151,11 +162,13 @@ public:
void set_failed_verification() { _failed_verification = true; }
InstanceKlass* nest_host() const { return _nest_host; }
void set_nest_host(InstanceKlass* nest_host) { _nest_host = nest_host; }
+
DumpTimeClassInfo clone();
+ size_t runtime_info_bytesize() const;
};
-
-inline unsigned DumpTimeSharedClassTable_hash(InstanceKlass* const& k) {
+template <typename T>
+inline unsigned DumpTimeSharedClassTable_hash(T* const& k) {
if (DumpSharedSpaces) {
// Deterministic archive contents
uintx delta = k->name() - MetaspaceShared::symbol_rs_base();
@@ -163,7 +176,7 @@ inline unsigned DumpTimeSharedClassTable_hash(InstanceKlass* const& k) {
} else {
// Deterministic archive is not possible because classes can be loaded
// in multiple threads.
- return primitive_hash<InstanceKlass*>(k);
+ return primitive_hash<T*>(k);
}
}
diff --git a/src/hotspot/share/cds/filemap.cpp b/src/hotspot/share/cds/filemap.cpp
index 69ab7430e16d881181ce530c2466ecbfabfe7982..b8e28f30b72bf21aa8c25194fa887da73e99194f 100644
--- a/src/hotspot/share/cds/filemap.cpp
+++ b/src/hotspot/share/cds/filemap.cpp
@@ -1108,6 +1108,9 @@ public:
}
~FileHeaderHelper() {
+ if (_header != nullptr) {
+ FREE_C_HEAP_ARRAY(char, _header);
+ }
if (_fd != -1) {
::close(_fd);
}
@@ -1994,7 +1997,7 @@ void FileMapInfo::map_or_load_heap_regions() {
} else if (HeapShared::can_load()) {
success = HeapShared::load_heap_regions(this);
} else {
- log_info(cds)("Cannot use CDS heap data. UseEpsilonGC, UseG1GC or UseSerialGC are required.");
+ log_info(cds)("Cannot use CDS heap data. UseEpsilonGC, UseG1GC, UseSerialGC or UseParallelGC are required.");
}
}
diff --git a/src/hotspot/share/cds/heapShared.cpp b/src/hotspot/share/cds/heapShared.cpp
index 922eab124af2baef29ff52e0a3eefb6df9912d23..14bca5d74b62bce1074da9f76694faaa04a7bfb2 100644
--- a/src/hotspot/share/cds/heapShared.cpp
+++ b/src/hotspot/share/cds/heapShared.cpp
@@ -25,6 +25,7 @@
#include "precompiled.hpp"
#include "cds/archiveBuilder.hpp"
#include "cds/archiveUtils.hpp"
+#include "cds/cdsHeapVerifier.hpp"
#include "cds/filemap.hpp"
#include "cds/heapShared.inline.hpp"
#include "cds/metaspaceShared.hpp"
@@ -42,7 +43,6 @@
#include "gc/shared/gcLocker.hpp"
#include "gc/shared/gcVMOperations.hpp"
#include "logging/log.hpp"
-#include "logging/logMessage.hpp"
#include "logging/logStream.hpp"
#include "memory/iterator.inline.hpp"
#include "memory/metadataFactory.hpp"
@@ -143,11 +143,24 @@ bool HeapShared::is_archived_object_during_dumptime(oop p) {
}
#endif
-////////////////////////////////////////////////////////////////
-//
-// Java heap object archiving support
-//
-////////////////////////////////////////////////////////////////
+static bool is_subgraph_root_class_of(ArchivableStaticFieldInfo fields[], int num, InstanceKlass* ik) {
+ for (int i = 0; i < num; i++) {
+ if (fields[i].klass == ik) {
+ return true;
+ }
+ }
+ return false;
+}
+
+bool HeapShared::is_subgraph_root_class(InstanceKlass* ik) {
+ return is_subgraph_root_class_of(closed_archive_subgraph_entry_fields,
+ num_closed_archive_subgraph_entry_fields, ik) ||
+ is_subgraph_root_class_of(open_archive_subgraph_entry_fields,
+ num_open_archive_subgraph_entry_fields, ik) ||
+ is_subgraph_root_class_of(fmg_open_archive_subgraph_entry_fields,
+ num_fmg_open_archive_subgraph_entry_fields, ik);
+}
+
void HeapShared::fixup_regions() {
FileMapInfo* mapinfo = FileMapInfo::current_info();
if (is_mapped()) {
@@ -203,9 +216,9 @@ HeapShared::ArchivedObjectCache* HeapShared::_archived_object_cache = NULL;
oop HeapShared::find_archived_heap_object(oop obj) {
assert(DumpSharedSpaces, "dump-time only");
ArchivedObjectCache* cache = archived_object_cache();
- oop* p = cache->get(obj);
+ CachedOopInfo* p = cache->get(obj);
if (p != NULL) {
- return *p;
+ return p->_obj;
} else {
return NULL;
}
@@ -302,7 +315,8 @@ oop HeapShared::archive_object(oop obj) {
assert(hash_original == hash_archived, "Different hash codes: original %x, archived %x", hash_original, hash_archived);
ArchivedObjectCache* cache = archived_object_cache();
- cache->put(obj, archived_oop);
+ CachedOopInfo info = make_cached_oop_info(archived_oop);
+ cache->put(obj, info);
if (log_is_enabled(Debug, cds, heap)) {
ResourceMark rm;
log_debug(cds, heap)("Archived heap object " PTR_FORMAT " ==> " PTR_FORMAT " : %s",
@@ -336,6 +350,94 @@ void HeapShared::archive_klass_objects() {
}
}
+// -- Handling of Enum objects
+// Java Enum classes have synthetic <clinit> methods that look like this
+// enum MyEnum {FOO, BAR}
+// MyEnum::<clinit> {
+// /*static final MyEnum*/ MyEnum::FOO = new MyEnum("FOO");
+// /*static final MyEnum*/ MyEnum::BAR = new MyEnum("BAR");
+// }
+//
+// If the MyEnum::FOO object is referenced by any of the archived subgraphs, we must
+// ensure that the archived value is identical (in object address) to the runtime
+// value of MyEnum::FOO.
+//
+// However, since MyEnum::<clinit> is synthetically generated by javac, there's
+// no way of programmatically handling this inside the Java code (as you would handle
+// ModuleLayer::EMPTY_LAYER, for example).
+//
+// Instead, we archive all static fields of such Enum classes. At runtime,
+// HeapShared::initialize_enum_klass() will skip the <clinit> method and pull
+// the static fields out of the archived heap.
+void HeapShared::check_enum_obj(int level,
+ KlassSubGraphInfo* subgraph_info,
+ oop orig_obj,
+ bool is_closed_archive) {
+ Klass* k = orig_obj->klass();
+ Klass* relocated_k = ArchiveBuilder::get_relocated_klass(k);
+ if (!k->is_instance_klass()) {
+ return;
+ }
+ InstanceKlass* ik = InstanceKlass::cast(k);
+ if (ik->java_super() == vmClasses::Enum_klass() && !ik->has_archived_enum_objs()) {
+ ResourceMark rm;
+ ik->set_has_archived_enum_objs();
+ relocated_k->set_has_archived_enum_objs();
+ oop mirror = ik->java_mirror();
+
+ for (JavaFieldStream fs(ik); !fs.done(); fs.next()) {
+ if (fs.access_flags().is_static()) {
+ fieldDescriptor& fd = fs.field_descriptor();
+ if (fd.field_type() != T_OBJECT && fd.field_type() != T_ARRAY) {
+ guarantee(false, "static field %s::%s must be T_OBJECT or T_ARRAY",
+ ik->external_name(), fd.name()->as_C_string());
+ }
+ oop oop_field = mirror->obj_field(fd.offset());
+ if (oop_field == NULL) {
+ guarantee(false, "static field %s::%s must not be null",
+ ik->external_name(), fd.name()->as_C_string());
+ } else if (oop_field->klass() != ik && oop_field->klass() != ik->array_klass_or_null()) {
+ guarantee(false, "static field %s::%s is of the wrong type",
+ ik->external_name(), fd.name()->as_C_string());
+ }
+ oop archived_oop_field = archive_reachable_objects_from(level, subgraph_info, oop_field, is_closed_archive);
+ int root_index = append_root(archived_oop_field);
+ log_info(cds, heap)("Archived enum obj @%d %s::%s (" INTPTR_FORMAT " -> " INTPTR_FORMAT ")",
+ root_index, ik->external_name(), fd.name()->as_C_string(),
+ p2i((oopDesc*)oop_field), p2i((oopDesc*)archived_oop_field));
+ SystemDictionaryShared::add_enum_klass_static_field(ik, root_index);
+ }
+ }
+ }
+}
+
+// See comments in HeapShared::check_enum_obj()
+bool HeapShared::initialize_enum_klass(InstanceKlass* k, TRAPS) {
+ if (!is_fully_available()) {
+ return false;
+ }
+
+ RunTimeClassInfo* info = RunTimeClassInfo::get_for(k);
+ assert(info != NULL, "sanity");
+
+ if (log_is_enabled(Info, cds, heap)) {
+ ResourceMark rm;
+ log_info(cds, heap)("Initializing Enum class: %s", k->external_name());
+ }
+
+ oop mirror = k->java_mirror();
+ int i = 0;
+ for (JavaFieldStream fs(k); !fs.done(); fs.next()) {
+ if (fs.access_flags().is_static()) {
+ int root_index = info->enum_klass_static_field_root_index_at(i++);
+ fieldDescriptor& fd = fs.field_descriptor();
+ assert(fd.field_type() == T_OBJECT || fd.field_type() == T_ARRAY, "must be");
+ mirror->obj_field_put(fd.offset(), get_root(root_index, /*clear=*/true));
+ }
+ }
+ return true;
+}
+
void HeapShared::run_full_gc_in_vm_thread() {
if (HeapShared::can_write()) {
// Avoid fragmentation while archiving heap objects.
@@ -377,6 +479,7 @@ void HeapShared::archive_objects(GrowableArray* closed_regions,
log_info(cds)("Dumping objects to open archive heap region ...");
copy_open_objects(open_regions);
+ CDSHeapVerifier::verify();
destroy_archived_object_cache();
}
@@ -471,7 +574,7 @@ KlassSubGraphInfo* HeapShared::init_subgraph_info(Klass* k, bool is_full_module_
bool created;
Klass* relocated_k = ArchiveBuilder::get_relocated_klass(k);
KlassSubGraphInfo* info =
- _dump_time_subgraph_info_table->put_if_absent(relocated_k, KlassSubGraphInfo(relocated_k, is_full_module_graph),
+ _dump_time_subgraph_info_table->put_if_absent(k, KlassSubGraphInfo(relocated_k, is_full_module_graph),
&created);
assert(created, "must not initialize twice");
return info;
@@ -479,8 +582,7 @@ KlassSubGraphInfo* HeapShared::init_subgraph_info(Klass* k, bool is_full_module_
KlassSubGraphInfo* HeapShared::get_subgraph_info(Klass* k) {
assert(DumpSharedSpaces, "dump time only");
- Klass* relocated_k = ArchiveBuilder::get_relocated_klass(k);
- KlassSubGraphInfo* info = _dump_time_subgraph_info_table->get(relocated_k);
+ KlassSubGraphInfo* info = _dump_time_subgraph_info_table->get(k);
assert(info != NULL, "must have been initialized");
return info;
}
@@ -641,7 +743,8 @@ struct CopyKlassSubGraphInfoToArchive : StackObj {
(ArchivedKlassSubGraphInfoRecord*)ArchiveBuilder::ro_region_alloc(sizeof(ArchivedKlassSubGraphInfoRecord));
record->init(&info);
- unsigned int hash = SystemDictionaryShared::hash_for_shared_dictionary((address)klass);
+ Klass* relocated_k = ArchiveBuilder::get_relocated_klass(klass);
+ unsigned int hash = SystemDictionaryShared::hash_for_shared_dictionary((address)relocated_k);
u4 delta = ArchiveBuilder::current()->any_to_offset_u4(record);
_writer->add(hash, delta);
}
@@ -903,6 +1006,11 @@ class WalkOopAndArchiveClosure: public BasicOopIterateClosure {
KlassSubGraphInfo* _subgraph_info;
oop _orig_referencing_obj;
oop _archived_referencing_obj;
+
+ // The following are for maintaining a stack for determining
+ // CachedOopInfo::_referrer
+ static WalkOopAndArchiveClosure* _current;
+ WalkOopAndArchiveClosure* _last;
public:
WalkOopAndArchiveClosure(int level,
bool is_closed_archive,
@@ -912,7 +1020,13 @@ class WalkOopAndArchiveClosure: public BasicOopIterateClosure {
_level(level), _is_closed_archive(is_closed_archive),
_record_klasses_only(record_klasses_only),
_subgraph_info(subgraph_info),
- _orig_referencing_obj(orig), _archived_referencing_obj(archived) {}
+ _orig_referencing_obj(orig), _archived_referencing_obj(archived) {
+ _last = _current;
+ _current = this;
+ }
+ ~WalkOopAndArchiveClosure() {
+ _current = _last;
+ }
void do_oop(narrowOop *p) { WalkOopAndArchiveClosure::do_oop_work(p); }
void do_oop( oop *p) { WalkOopAndArchiveClosure::do_oop_work(p); }
@@ -949,8 +1063,26 @@ class WalkOopAndArchiveClosure: public BasicOopIterateClosure {
}
}
}
+
+ public:
+ static WalkOopAndArchiveClosure* current() { return _current; }
+ oop orig_referencing_obj() { return _orig_referencing_obj; }
+ KlassSubGraphInfo* subgraph_info() { return _subgraph_info; }
};
+WalkOopAndArchiveClosure* WalkOopAndArchiveClosure::_current = NULL;
+
+HeapShared::CachedOopInfo HeapShared::make_cached_oop_info(oop orig_obj) {
+ CachedOopInfo info;
+ WalkOopAndArchiveClosure* walker = WalkOopAndArchiveClosure::current();
+
+ info._subgraph_info = (walker == NULL) ? NULL : walker->subgraph_info();
+ info._referrer = (walker == NULL) ? NULL : walker->orig_referencing_obj();
+ info._obj = orig_obj;
+
+ return info;
+}
+
void HeapShared::check_closed_region_object(InstanceKlass* k) {
// Check fields in the object
for (JavaFieldStream fs(k); !fs.done(); fs.next()) {
@@ -1076,6 +1208,8 @@ oop HeapShared::archive_reachable_objects_from(int level,
if (is_closed_archive && orig_k->is_instance_klass()) {
check_closed_region_object(InstanceKlass::cast(orig_k));
}
+
+ check_enum_obj(level + 1, subgraph_info, orig_obj, is_closed_archive);
return archived_obj;
}
diff --git a/src/hotspot/share/cds/heapShared.hpp b/src/hotspot/share/cds/heapShared.hpp
index fc7a0bcb57aec7972eea2b568b63b31cef982d3c..d8fc71fc76e9ba1279401c392363a6cb8f7fcfc4 100644
--- a/src/hotspot/share/cds/heapShared.hpp
+++ b/src/hotspot/share/cds/heapShared.hpp
@@ -25,6 +25,7 @@
#ifndef SHARE_CDS_HEAPSHARED_HPP
#define SHARE_CDS_HEAPSHARED_HPP
+#include "cds/dumpTimeClassInfo.hpp"
#include "cds/metaspaceShared.hpp"
#include "classfile/compactHashtable.hpp"
#include "classfile/javaClasses.hpp"
@@ -43,6 +44,7 @@
#if INCLUDE_CDS_JAVA_HEAP
class DumpedInternedStrings;
class FileMapInfo;
+class KlassSubGraphInfo;
struct ArchivableStaticFieldInfo {
const char* klass_name;
@@ -193,7 +195,7 @@ public:
static bool is_fully_available() {
return is_loaded() || is_mapped();
}
-
+ static bool is_subgraph_root_class(InstanceKlass* ik);
private:
#if INCLUDE_CDS_JAVA_HEAP
static bool _disable_writing;
@@ -228,29 +230,35 @@ public:
assert(is_in_loaded_heap(o), "must be");
}
+ struct CachedOopInfo {
+ KlassSubGraphInfo* _subgraph_info;
+ oop _referrer;
+ oop _obj;
+ CachedOopInfo() :_subgraph_info(), _referrer(), _obj() {}
+ };
+
private:
+ static void check_enum_obj(int level,
+ KlassSubGraphInfo* subgraph_info,
+ oop orig_obj,
+ bool is_closed_archive);
static bool is_in_loaded_heap(uintptr_t o) {
return (_loaded_heap_bottom <= o && o < _loaded_heap_top);
}
- typedef ResourceHashtable ArchivedObjectCache;
static ArchivedObjectCache* _archived_object_cache;
- static unsigned klass_hash(Klass* const& klass) {
- // Generate deterministic hashcode even if SharedBaseAddress is changed due to ASLR.
- return primitive_hash(address(klass) - SharedBaseAddress);
- }
-
class DumpTimeKlassSubGraphInfoTable
: public ResourceHashtable {
+ DumpTimeSharedClassTable_hash> {
public:
int _count;
};
@@ -272,7 +280,7 @@ private:
static RunTimeKlassSubGraphInfoTable _run_time_subgraph_info_table;
static void check_closed_region_object(InstanceKlass* k);
-
+ static CachedOopInfo make_cached_oop_info(oop orig_obj);
static void archive_object_subgraphs(ArchivableStaticFieldInfo fields[],
int num,
bool is_closed_archive,
@@ -482,6 +490,7 @@ private:
static void init_for_dumping(TRAPS) NOT_CDS_JAVA_HEAP_RETURN;
static void write_subgraph_info_table() NOT_CDS_JAVA_HEAP_RETURN;
static void serialize(SerializeClosure* soc) NOT_CDS_JAVA_HEAP_RETURN;
+ static bool initialize_enum_klass(InstanceKlass* k, TRAPS) NOT_CDS_JAVA_HEAP_RETURN_(false);
};
#if INCLUDE_CDS_JAVA_HEAP
diff --git a/src/hotspot/share/cds/runTimeClassInfo.cpp b/src/hotspot/share/cds/runTimeClassInfo.cpp
index 52fa94c119d9c12a05ccd9f02e9b577f6845d157..77ec5de8c3b1e4df55e79a52cabbe1329239b932 100644
--- a/src/hotspot/share/cds/runTimeClassInfo.cpp
+++ b/src/hotspot/share/cds/runTimeClassInfo.cpp
@@ -1,6 +1,5 @@
-
/*
- * Copyright (c) 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2021, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -64,6 +63,15 @@ void RunTimeClassInfo::init(DumpTimeClassInfo& info) {
InstanceKlass* n_h = info.nest_host();
set_nest_host(n_h);
}
+ if (_klass->has_archived_enum_objs()) {
+ int num = info.num_enum_klass_static_fields();
+ set_num_enum_klass_static_fields(num);
+ for (int i = 0; i < num; i++) {
+ int root_index = info.enum_klass_static_field(i);
+ set_enum_klass_static_field_root_index_at(i, root_index);
+ }
+ }
+
ArchivePtrMarker::mark_pointer(&_klass);
}
diff --git a/src/hotspot/share/cds/runTimeClassInfo.hpp b/src/hotspot/share/cds/runTimeClassInfo.hpp
index adc828c4f88c28a9b3962913ef6e6d4d91a85713..74fdf92ebafa9ae70432bca1c8d656bb52299c8a 100644
--- a/src/hotspot/share/cds/runTimeClassInfo.hpp
+++ b/src/hotspot/share/cds/runTimeClassInfo.hpp
@@ -1,6 +1,5 @@
-
/*
- * Copyright (c) 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2021, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -64,30 +63,40 @@ public:
return (Symbol*)(SharedBaseAddress + _name);
}
};
+ struct RTEnumKlassStaticFields {
+ int _num;
+ int _root_indices[1];
+ };
InstanceKlass* _klass;
int _num_verifier_constraints;
int _num_loader_constraints;
- // optional CrcInfo _crc; (only for UNREGISTERED classes)
- // optional InstanceKlass* _nest_host
- // optional RTLoaderConstraint _loader_constraint_types[_num_loader_constraints]
- // optional RTVerifierConstraint _verifier_constraints[_num_verifier_constraints]
- // optional char _verifier_constraint_flags[_num_verifier_constraints]
+ // optional CrcInfo _crc; (only for UNREGISTERED classes)
+ // optional InstanceKlass* _nest_host
+ // optional RTLoaderConstraint _loader_constraint_types[_num_loader_constraints]
+ // optional RTVerifierConstraint _verifier_constraints[_num_verifier_constraints]
+ // optional char _verifier_constraint_flags[_num_verifier_constraints]
+ // optional RTEnumKlassStaticFields _enum_klass_static_fields;
private:
static size_t header_size_size() {
- return sizeof(RunTimeClassInfo);
+ return align_up(sizeof(RunTimeClassInfo), wordSize);
}
static size_t verifier_constraints_size(int num_verifier_constraints) {
- return sizeof(RTVerifierConstraint) * num_verifier_constraints;
+ return align_up(sizeof(RTVerifierConstraint) * num_verifier_constraints, wordSize);
}
static size_t verifier_constraint_flags_size(int num_verifier_constraints) {
- return sizeof(char) * num_verifier_constraints;
+ return align_up(sizeof(char) * num_verifier_constraints, wordSize);
}
static size_t loader_constraints_size(int num_loader_constraints) {
- return sizeof(RTLoaderConstraint) * num_loader_constraints;
+ return align_up(sizeof(RTLoaderConstraint) * num_loader_constraints, wordSize);
}
+ static size_t enum_klass_static_fields_size(int num_fields) {
+ size_t size = num_fields <= 0 ? 0 : sizeof(RTEnumKlassStaticFields) + (num_fields - 1) * sizeof(int);
+ return align_up(size, wordSize);
+ }
+
static size_t nest_host_size(InstanceKlass* klass) {
if (klass->is_hidden()) {
return sizeof(InstanceKlass*);
@@ -98,13 +107,15 @@ private:
static size_t crc_size(InstanceKlass* klass);
public:
- static size_t byte_size(InstanceKlass* klass, int num_verifier_constraints, int num_loader_constraints) {
+ static size_t byte_size(InstanceKlass* klass, int num_verifier_constraints, int num_loader_constraints,
+ int num_enum_klass_static_fields) {
return header_size_size() +
crc_size(klass) +
nest_host_size(klass) +
loader_constraints_size(num_loader_constraints) +
verifier_constraints_size(num_verifier_constraints) +
- verifier_constraint_flags_size(num_verifier_constraints);
+ verifier_constraint_flags_size(num_verifier_constraints) +
+ enum_klass_static_fields_size(num_enum_klass_static_fields);
}
private:
@@ -113,7 +124,7 @@ private:
}
size_t nest_host_offset() const {
- return crc_offset() + crc_size(_klass);
+ return crc_offset() + crc_size(_klass);
}
size_t loader_constraints_offset() const {
@@ -125,6 +136,9 @@ private:
size_t verifier_constraint_flags_offset() const {
return verifier_constraints_offset() + verifier_constraints_size(_num_verifier_constraints);
}
+ size_t enum_klass_static_fields_offset() const {
+ return verifier_constraint_flags_offset() + verifier_constraint_flags_size(_num_verifier_constraints);
+ }
void check_verifier_constraint_offset(int i) const {
assert(0 <= i && i < _num_verifier_constraints, "sanity");
@@ -134,6 +148,11 @@ private:
assert(0 <= i && i < _num_loader_constraints, "sanity");
}
+ RTEnumKlassStaticFields* enum_klass_static_fields_addr() const {
+ assert(_klass->has_archived_enum_objs(), "sanity");
+ return (RTEnumKlassStaticFields*)(address(this) + enum_klass_static_fields_offset());
+ }
+
public:
CrcInfo* crc() const {
assert(crc_size(_klass) > 0, "must be");
@@ -187,6 +206,23 @@ public:
return verifier_constraint_flags()[i];
}
+ int num_enum_klass_static_fields(int i) const {
+ return enum_klass_static_fields_addr()->_num;
+ }
+
+ void set_num_enum_klass_static_fields(int num) {
+ enum_klass_static_fields_addr()->_num = num;
+ }
+
+ int enum_klass_static_field_root_index_at(int i) const {
+ assert(0 <= i && i < enum_klass_static_fields_addr()->_num, "must be");
+ return enum_klass_static_fields_addr()->_root_indices[i];
+ }
+
+ void set_enum_klass_static_field_root_index_at(int i, int root_index) {
+ assert(0 <= i && i < enum_klass_static_fields_addr()->_num, "must be");
+ enum_klass_static_fields_addr()->_root_indices[i] = root_index;
+ }
private:
// ArchiveBuilder::make_shallow_copy() has reserved a pointer immediately
// before archived InstanceKlasses. We can use this slot to do a quick
diff --git a/src/hotspot/share/ci/ciStreams.cpp b/src/hotspot/share/ci/ciStreams.cpp
index 1b9b6c7adf85c6895f9352f3df25250deea06892..6c7e9b6ee4171585bed3be470077722a2c77af27 100644
--- a/src/hotspot/share/ci/ciStreams.cpp
+++ b/src/hotspot/share/ci/ciStreams.cpp
@@ -256,6 +256,14 @@ constantTag ciBytecodeStream::get_constant_pool_tag(int index) const {
return _method->get_Method()->constants()->constant_tag_at(index);
}
+// ------------------------------------------------------------------
+// ciBytecodeStream::get_raw_pool_tag
+//
+constantTag ciBytecodeStream::get_raw_pool_tag(int index) const {
+ VM_ENTRY_MARK;
+ return _method->get_Method()->constants()->tag_at(index);
+}
+
// ------------------------------------------------------------------
// ciBytecodeStream::get_basic_type_for_constant_at
//
diff --git a/src/hotspot/share/ci/ciStreams.hpp b/src/hotspot/share/ci/ciStreams.hpp
index e46b2e2bfa21fd5748fd3c6f6fe9ca0febdde9f9..faf3e87cee64f405315777e37c114a0bf9b792e5 100644
--- a/src/hotspot/share/ci/ciStreams.hpp
+++ b/src/hotspot/share/ci/ciStreams.hpp
@@ -230,12 +230,25 @@ public:
constantTag get_constant_pool_tag(int index) const;
BasicType get_basic_type_for_constant_at(int index) const;
+ constantTag get_raw_pool_tag(int index) const;
+
// True if the klass-using bytecode points to an unresolved klass
bool is_unresolved_klass() const {
constantTag tag = get_constant_pool_tag(get_klass_index());
return tag.is_unresolved_klass();
}
+ bool is_dynamic_constant() const {
+ assert(cur_bc() == Bytecodes::_ldc ||
+ cur_bc() == Bytecodes::_ldc_w ||
+ cur_bc() == Bytecodes::_ldc2_w, "not supported: %s", Bytecodes::name(cur_bc()));
+
+ int index = get_constant_pool_index();
+ constantTag tag = get_raw_pool_tag(index);
+ return tag.is_dynamic_constant() ||
+ tag.is_dynamic_constant_in_error();
+ }
+
bool is_in_error() const {
assert(cur_bc() == Bytecodes::_ldc ||
cur_bc() == Bytecodes::_ldc_w ||
diff --git a/src/hotspot/share/classfile/classFileParser.cpp b/src/hotspot/share/classfile/classFileParser.cpp
index 2325241dc64f7213d23b2ea3d0700a54da7a6102..182ed43bedf91a25aa7734f1d87e0f0fc2e41da4 100644
--- a/src/hotspot/share/classfile/classFileParser.cpp
+++ b/src/hotspot/share/classfile/classFileParser.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1997, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -2741,6 +2741,7 @@ Method* ClassFileParser::parse_method(const ClassFileStream* const cfs,
access_flags,
&sizes,
ConstMethod::NORMAL,
+ _cp->symbol_at(name_index),
CHECK_NULL);
ClassLoadingService::add_class_method_size(m->size()*wordSize);
diff --git a/src/hotspot/share/classfile/defaultMethods.cpp b/src/hotspot/share/classfile/defaultMethods.cpp
index 2ef313982d75037a580e7777b7e7667e121b05e1..4d181bdadf0dd1ad204b51c9566602aa648ad6bf 100644
--- a/src/hotspot/share/classfile/defaultMethods.cpp
+++ b/src/hotspot/share/classfile/defaultMethods.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2012, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2012, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -900,7 +900,7 @@ static Method* new_method(
Method* m = Method::allocate(cp->pool_holder()->class_loader_data(),
code_length, flags, &sizes,
- mt, CHECK_NULL);
+ mt, name, CHECK_NULL);
m->set_constants(NULL); // This will get filled in later
m->set_name_index(cp->utf8(name));
diff --git a/src/hotspot/share/classfile/systemDictionary.cpp b/src/hotspot/share/classfile/systemDictionary.cpp
index 195fa979ef996e793e55eca6474a90731efbd088..c3b9b0104431970b4fbc6272c40b9a4631b1b296 100644
--- a/src/hotspot/share/classfile/systemDictionary.cpp
+++ b/src/hotspot/share/classfile/systemDictionary.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1997, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -1559,8 +1559,8 @@ InstanceKlass* SystemDictionary::find_or_define_instance_class(Symbol* class_nam
// ----------------------------------------------------------------------------
-// Update hierachy. This is done before the new klass has been added to the SystemDictionary. The Compile_lock
-// is held, to ensure that the compiler is not using the class hierachy, and that deoptimization will kick in
+// Update hierarchy. This is done before the new klass has been added to the SystemDictionary. The Compile_lock
+// is held, to ensure that the compiler is not using the class hierarchy, and that deoptimization will kick in
// before a new class is used.
void SystemDictionary::add_to_hierarchy(InstanceKlass* k) {
@@ -1574,7 +1574,7 @@ void SystemDictionary::add_to_hierarchy(InstanceKlass* k) {
// The compiler reads the hierarchy outside of the Compile_lock.
// Access ordering is used to add to hierarchy.
- // Link into hierachy.
+ // Link into hierarchy.
k->append_to_sibling_list(); // add to superklass/sibling list
k->process_interfaces(); // handle all "implements" declarations
@@ -1724,7 +1724,7 @@ void SystemDictionary::check_constraints(unsigned int name_hash,
}
}
-// Update class loader data dictionary - done after check_constraint and add_to_hierachy
+// Update class loader data dictionary - done after check_constraint and add_to_hierarchy
// have been called.
void SystemDictionary::update_dictionary(unsigned int hash,
InstanceKlass* k,
@@ -2014,8 +2014,9 @@ Method* SystemDictionary::find_method_handle_intrinsic(vmIntrinsicID iid,
spe = NULL;
// Must create lots of stuff here, but outside of the SystemDictionary lock.
m = Method::make_method_handle_intrinsic(iid, signature, CHECK_NULL);
- if (!Arguments::is_interpreter_only()) {
+ if (!Arguments::is_interpreter_only() || iid == vmIntrinsics::_linkToNative) {
// Generate a compiled form of the MH intrinsic.
+ // linkToNative doesn't have an interpreter-specific implementation, so it always has to go through the compiled version.
AdapterHandlerLibrary::create_native_wrapper(m);
// Check if have the compiled code.
if (!m->has_compiled_code()) {
diff --git a/src/hotspot/share/classfile/systemDictionaryShared.cpp b/src/hotspot/share/classfile/systemDictionaryShared.cpp
index 29b01851ac85651b64f706507591188dcfd443bd..66f9a433dfc097a23fb8909d9aa80900a6da47d2 100644
--- a/src/hotspot/share/classfile/systemDictionaryShared.cpp
+++ b/src/hotspot/share/classfile/systemDictionaryShared.cpp
@@ -793,6 +793,13 @@ bool SystemDictionaryShared::add_verification_constraint(InstanceKlass* k, Symbo
}
}
+void SystemDictionaryShared::add_enum_klass_static_field(InstanceKlass* ik, int root_index) {
+ assert(DumpSharedSpaces, "static dump only");
+ DumpTimeClassInfo* info = SystemDictionaryShared::find_or_allocate_info_for_locked(ik);
+ assert(info != NULL, "must be");
+ info->add_enum_klass_static_field(root_index);
+}
+
void SystemDictionaryShared::add_to_dump_time_lambda_proxy_class_dictionary(LambdaProxyClassKey& key,
InstanceKlass* proxy_klass) {
assert_lock_strong(DumpTimeTable_lock);
@@ -1174,7 +1181,7 @@ public:
bool do_entry(InstanceKlass* k, DumpTimeClassInfo& info) {
if (!info.is_excluded()) {
- size_t byte_size = RunTimeClassInfo::byte_size(info._klass, info.num_verifier_constraints(), info.num_loader_constraints());
+ size_t byte_size = info.runtime_info_bytesize();
_shared_class_info_size += align_up(byte_size, SharedSpaceObjectAlignment);
}
return true; // keep on iterating
@@ -1283,7 +1290,7 @@ public:
bool do_entry(InstanceKlass* k, DumpTimeClassInfo& info) {
if (!info.is_excluded() && info.is_builtin() == _is_builtin) {
- size_t byte_size = RunTimeClassInfo::byte_size(info._klass, info.num_verifier_constraints(), info.num_loader_constraints());
+ size_t byte_size = info.runtime_info_bytesize();
RunTimeClassInfo* record;
record = (RunTimeClassInfo*)ArchiveBuilder::ro_region_alloc(byte_size);
record->init(info);
diff --git a/src/hotspot/share/classfile/systemDictionaryShared.hpp b/src/hotspot/share/classfile/systemDictionaryShared.hpp
index a05484b429593149eaa55ec51dfe28f0dc830f91..0dbbd486dc37b92c39637f29eb360fc0c9265b7b 100644
--- a/src/hotspot/share/classfile/systemDictionaryShared.hpp
+++ b/src/hotspot/share/classfile/systemDictionaryShared.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2014, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2014, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -193,6 +193,9 @@ private:
public:
static bool is_hidden_lambda_proxy(InstanceKlass* ik);
static bool is_early_klass(InstanceKlass* k); // Was k loaded while JvmtiExport::is_early_phase()==true
+ static bool has_archived_enum_objs(InstanceKlass* ik);
+ static void set_has_archived_enum_objs(InstanceKlass* ik);
+
static InstanceKlass* find_builtin_class(Symbol* class_name);
static const RunTimeClassInfo* find_record(RunTimeSharedDictionary* static_dict,
@@ -243,6 +246,7 @@ public:
bool from_is_array, bool from_is_object) NOT_CDS_RETURN_(false);
static void check_verification_constraints(InstanceKlass* klass,
TRAPS) NOT_CDS_RETURN;
+ static void add_enum_klass_static_field(InstanceKlass* ik, int root_index);
static void set_class_has_failed_verification(InstanceKlass* ik) NOT_CDS_RETURN;
static bool has_class_failed_verification(InstanceKlass* ik) NOT_CDS_RETURN_(false);
static void add_lambda_proxy_class(InstanceKlass* caller_ik,
diff --git a/src/hotspot/share/classfile/vmClassMacros.hpp b/src/hotspot/share/classfile/vmClassMacros.hpp
index 357e5538809fb83c2333608e1596922785a5e6e6..2879076004b5ad7a2d6f20593cf86f683997f3be 100644
--- a/src/hotspot/share/classfile/vmClassMacros.hpp
+++ b/src/hotspot/share/classfile/vmClassMacros.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2021, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -136,6 +136,7 @@
do_klass(ByteArrayInputStream_klass, java_io_ByteArrayInputStream ) \
do_klass(URL_klass, java_net_URL ) \
do_klass(URLClassLoader_klass, java_net_URLClassLoader ) \
+ do_klass(Enum_klass, java_lang_Enum ) \
do_klass(Jar_Manifest_klass, java_util_jar_Manifest ) \
do_klass(jdk_internal_loader_BuiltinClassLoader_klass,jdk_internal_loader_BuiltinClassLoader ) \
do_klass(jdk_internal_loader_ClassLoaders_klass, jdk_internal_loader_ClassLoaders ) \
diff --git a/src/hotspot/share/classfile/vmIntrinsics.cpp b/src/hotspot/share/classfile/vmIntrinsics.cpp
index cc3dc1ebdccf58fc689140cfcc02f432ec07dfef..a329669bed3d25b0a9284b4cce4482581faa9754 100644
--- a/src/hotspot/share/classfile/vmIntrinsics.cpp
+++ b/src/hotspot/share/classfile/vmIntrinsics.cpp
@@ -229,7 +229,7 @@ bool vmIntrinsics::disabled_by_jvm_flags(vmIntrinsics::ID id) {
case vmIntrinsics::_loadFence:
case vmIntrinsics::_storeFence:
case vmIntrinsics::_fullFence:
- case vmIntrinsics::_hasNegatives:
+ case vmIntrinsics::_countPositives:
case vmIntrinsics::_Reference_get:
break;
default:
diff --git a/src/hotspot/share/classfile/vmIntrinsics.hpp b/src/hotspot/share/classfile/vmIntrinsics.hpp
index 7c3cb1d3f10235bcbb23d3363b930f98953a647d..5b2c6a9ce5610124d7de545ed690eb57dc9ea087 100644
--- a/src/hotspot/share/classfile/vmIntrinsics.hpp
+++ b/src/hotspot/share/classfile/vmIntrinsics.hpp
@@ -354,9 +354,9 @@ class methodHandle;
do_signature(Preconditions_checkLongIndex_signature, "(JJLjava/util/function/BiFunction;)J") \
\
do_class(java_lang_StringCoding, "java/lang/StringCoding") \
- do_intrinsic(_hasNegatives, java_lang_StringCoding, hasNegatives_name, hasNegatives_signature, F_S) \
- do_name( hasNegatives_name, "hasNegatives") \
- do_signature(hasNegatives_signature, "([BII)Z") \
+ do_intrinsic(_countPositives, java_lang_StringCoding, countPositives_name, countPositives_signature, F_S) \
+ do_name( countPositives_name, "countPositives") \
+ do_signature(countPositives_signature, "([BII)I") \
\
do_class(sun_nio_cs_iso8859_1_Encoder, "sun/nio/cs/ISO_8859_1$Encoder") \
do_intrinsic(_encodeISOArray, sun_nio_cs_iso8859_1_Encoder, encodeISOArray_name, encodeISOArray_signature, F_S) \
@@ -459,9 +459,8 @@ class methodHandle;
\
/* support for sun.security.provider.DigestBase */ \
do_class(sun_security_provider_digestbase, "sun/security/provider/DigestBase") \
- do_intrinsic(_digestBase_implCompressMB, sun_security_provider_digestbase, implCompressMB_name, implCompressMB_signature, F_R) \
+ do_intrinsic(_digestBase_implCompressMB, sun_security_provider_digestbase, implCompressMB_name, countPositives_signature, F_R) \
do_name( implCompressMB_name, "implCompressMultiBlock0") \
- do_signature(implCompressMB_signature, "([BII)I") \
\
/* support for java.util.Base64.Encoder*/ \
do_class(java_util_Base64_Encoder, "java/util/Base64$Encoder") \
diff --git a/src/hotspot/share/classfile/vmSymbols.hpp b/src/hotspot/share/classfile/vmSymbols.hpp
index 65acd172f685ff9ad9dcd33e3f6331a6d74cbee2..e0402392467e2b9c9a19196798b5d755e569844e 100644
--- a/src/hotspot/share/classfile/vmSymbols.hpp
+++ b/src/hotspot/share/classfile/vmSymbols.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1997, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -700,6 +700,7 @@
template(dumpSharedArchive_signature, "(ZLjava/lang/String;)Ljava/lang/String;") \
template(generateLambdaFormHolderClasses, "generateLambdaFormHolderClasses") \
template(generateLambdaFormHolderClasses_signature, "([Ljava/lang/String;)[Ljava/lang/Object;") \
+ template(java_lang_Enum, "java/lang/Enum") \
template(java_lang_invoke_Invokers_Holder, "java/lang/invoke/Invokers$Holder") \
template(java_lang_invoke_DirectMethodHandle_Holder, "java/lang/invoke/DirectMethodHandle$Holder") \
template(java_lang_invoke_LambdaForm_Holder, "java/lang/invoke/LambdaForm$Holder") \
diff --git a/src/hotspot/share/code/codeCache.cpp b/src/hotspot/share/code/codeCache.cpp
index 278792f2bc76619c9e79c9bcba3013dd82194023..0c1a579ea341c0658445dff0623f7290b6e484ec 100644
--- a/src/hotspot/share/code/codeCache.cpp
+++ b/src/hotspot/share/code/codeCache.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1997, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -101,22 +101,37 @@ class CodeBlob_sizes {
scopes_pcs_size = 0;
}
- int total() { return total_size; }
- bool is_empty() { return count == 0; }
+ int total() const { return total_size; }
+ bool is_empty() const { return count == 0; }
- void print(const char* title) {
- tty->print_cr(" #%d %s = %dK (hdr %d%%, loc %d%%, code %d%%, stub %d%%, [oops %d%%, metadata %d%%, data %d%%, pcs %d%%])",
- count,
- title,
- (int)(total() / K),
- header_size * 100 / total_size,
- relocation_size * 100 / total_size,
- code_size * 100 / total_size,
- stub_size * 100 / total_size,
- scopes_oop_size * 100 / total_size,
- scopes_metadata_size * 100 / total_size,
- scopes_data_size * 100 / total_size,
- scopes_pcs_size * 100 / total_size);
+ void print(const char* title) const {
+ if (is_empty()) {
+ tty->print_cr(" #%d %s = %dK",
+ count,
+ title,
+ total() / (int)K);
+ } else {
+ tty->print_cr(" #%d %s = %dK (hdr %dK %d%%, loc %dK %d%%, code %dK %d%%, stub %dK %d%%, [oops %dK %d%%, metadata %dK %d%%, data %dK %d%%, pcs %dK %d%%])",
+ count,
+ title,
+ total() / (int)K,
+ header_size / (int)K,
+ header_size * 100 / total_size,
+ relocation_size / (int)K,
+ relocation_size * 100 / total_size,
+ code_size / (int)K,
+ code_size * 100 / total_size,
+ stub_size / (int)K,
+ stub_size * 100 / total_size,
+ scopes_oop_size / (int)K,
+ scopes_oop_size * 100 / total_size,
+ scopes_metadata_size / (int)K,
+ scopes_metadata_size * 100 / total_size,
+ scopes_data_size / (int)K,
+ scopes_data_size * 100 / total_size,
+ scopes_pcs_size / (int)K,
+ scopes_pcs_size * 100 / total_size);
+ }
}
void add(CodeBlob* cb) {
@@ -353,7 +368,7 @@ bool CodeCache::heap_available(int code_blob_type) {
if (!SegmentedCodeCache) {
// No segmentation: use a single code heap
return (code_blob_type == CodeBlobType::All);
- } else if (Arguments::is_interpreter_only()) {
+ } else if (CompilerConfig::is_interpreter_only()) {
// Interpreter only: we don't need any method code heaps
return (code_blob_type == CodeBlobType::NonNMethod);
} else if (CompilerConfig::is_c1_profiling()) {
@@ -487,7 +502,7 @@ CodeBlob* CodeCache::next_blob(CodeHeap* heap, CodeBlob* cb) {
*/
CodeBlob* CodeCache::allocate(int size, int code_blob_type, bool handle_alloc_failure, int orig_code_blob_type) {
// Possibly wakes up the sweeper thread.
- NMethodSweeper::report_allocation(code_blob_type);
+ NMethodSweeper::report_allocation();
assert_locked_or_safepoint(CodeCache_lock);
assert(size > 0, "Code cache allocation request must be > 0 but is %d", size);
if (size <= 0) {
@@ -512,7 +527,7 @@ CodeBlob* CodeCache::allocate(int size, int code_blob_type, bool handle_alloc_fa
// Fallback solution: Try to store code in another code heap.
// NonNMethod -> MethodNonProfiled -> MethodProfiled (-> MethodNonProfiled)
// Note that in the sweeper, we check the reverse_free_ratio of the code heap
- // and force stack scanning if less than 10% of the code heap are free.
+ // and force stack scanning if less than 10% of the entire code cache is free.
int type = code_blob_type;
switch (type) {
case CodeBlobType::NonNMethod:
@@ -889,20 +904,17 @@ size_t CodeCache::max_capacity() {
return max_cap;
}
-/**
- * Returns the reverse free ratio. E.g., if 25% (1/4) of the code heap
- * is free, reverse_free_ratio() returns 4.
- */
-double CodeCache::reverse_free_ratio(int code_blob_type) {
- CodeHeap* heap = get_code_heap(code_blob_type);
- if (heap == NULL) {
- return 0;
- }
- double unallocated_capacity = MAX2((double)heap->unallocated_capacity(), 1.0); // Avoid division by 0;
- double max_capacity = (double)heap->max_capacity();
- double result = max_capacity / unallocated_capacity;
- assert (max_capacity >= unallocated_capacity, "Must be");
+// Returns the reverse free ratio. E.g., if 25% (1/4) of the code cache
+// is free, reverse_free_ratio() returns 4.
+// Since the code heap for each type of code blob falls forward to the
+// next type of code heap, return the reverse free ratio for the entire
+// code cache.
+double CodeCache::reverse_free_ratio() {
+ double unallocated = MAX2((double)unallocated_capacity(), 1.0); // Avoid division by 0;
+ double max = (double)max_capacity();
+ double result = max / unallocated;
+ assert (max >= unallocated, "Must be");
assert (result >= 1.0, "reverse_free_ratio must be at least 1. It is %f", result);
return result;
}
@@ -1226,9 +1238,9 @@ void CodeCache::report_codemem_full(int code_blob_type, bool print) {
CodeHeap* heap = get_code_heap(code_blob_type);
assert(heap != NULL, "heap is null");
- heap->report_full();
+ int full_count = heap->report_full();
- if ((heap->full_count() == 1) || print) {
+ if ((full_count == 1) || print) {
// Not yet reported for this heap, report
if (SegmentedCodeCache) {
ResourceMark rm;
@@ -1265,7 +1277,7 @@ void CodeCache::report_codemem_full(int code_blob_type, bool print) {
tty->print("%s", s.as_string());
}
- if (heap->full_count() == 1) {
+ if (full_count == 1) {
if (PrintCodeHeapAnalytics) {
CompileBroker::print_heapinfo(tty, "all", 4096); // details, may be a lot!
}
@@ -1430,27 +1442,73 @@ void CodeCache::print() {
#ifndef PRODUCT
if (!Verbose) return;
- CodeBlob_sizes live;
- CodeBlob_sizes dead;
+ CodeBlob_sizes live[CompLevel_full_optimization + 1];
+ CodeBlob_sizes dead[CompLevel_full_optimization + 1];
+ CodeBlob_sizes runtimeStub;
+ CodeBlob_sizes uncommonTrapStub;
+ CodeBlob_sizes deoptimizationStub;
+ CodeBlob_sizes adapter;
+ CodeBlob_sizes bufferBlob;
+ CodeBlob_sizes other;
FOR_ALL_ALLOCABLE_HEAPS(heap) {
FOR_ALL_BLOBS(cb, *heap) {
- if (!cb->is_alive()) {
- dead.add(cb);
+ if (cb->is_nmethod()) {
+ const int level = cb->as_nmethod()->comp_level();
+ assert(0 <= level && level <= CompLevel_full_optimization, "Invalid compilation level");
+ if (!cb->is_alive()) {
+ dead[level].add(cb);
+ } else {
+ live[level].add(cb);
+ }
+ } else if (cb->is_runtime_stub()) {
+ runtimeStub.add(cb);
+ } else if (cb->is_deoptimization_stub()) {
+ deoptimizationStub.add(cb);
+ } else if (cb->is_uncommon_trap_stub()) {
+ uncommonTrapStub.add(cb);
+ } else if (cb->is_adapter_blob()) {
+ adapter.add(cb);
+ } else if (cb->is_buffer_blob()) {
+ bufferBlob.add(cb);
} else {
- live.add(cb);
+ other.add(cb);
}
}
}
- tty->print_cr("CodeCache:");
tty->print_cr("nmethod dependency checking time %fs", dependentCheckTime.seconds());
- if (!live.is_empty()) {
- live.print("live");
- }
- if (!dead.is_empty()) {
- dead.print("dead");
+ tty->print_cr("nmethod blobs per compilation level:");
+ for (int i = 0; i <= CompLevel_full_optimization; i++) {
+ const char *level_name;
+ switch (i) {
+ case CompLevel_none: level_name = "none"; break;
+ case CompLevel_simple: level_name = "simple"; break;
+ case CompLevel_limited_profile: level_name = "limited profile"; break;
+ case CompLevel_full_profile: level_name = "full profile"; break;
+ case CompLevel_full_optimization: level_name = "full optimization"; break;
+ default: assert(false, "invalid compilation level");
+ }
+ tty->print_cr("%s:", level_name);
+ live[i].print("live");
+ dead[i].print("dead");
+ }
+
+ struct {
+ const char* name;
+ const CodeBlob_sizes* sizes;
+ } non_nmethod_blobs[] = {
+ { "runtime", &runtimeStub },
+ { "uncommon trap", &uncommonTrapStub },
+ { "deoptimization", &deoptimizationStub },
+ { "adapter", &adapter },
+ { "buffer blob", &bufferBlob },
+ { "other", &other },
+ };
+ tty->print_cr("Non-nmethod blobs:");
+ for (auto& blob: non_nmethod_blobs) {
+ blob.sizes->print(blob.name);
}
if (WizardMode) {
diff --git a/src/hotspot/share/code/codeCache.hpp b/src/hotspot/share/code/codeCache.hpp
index 53705aadcbe3639aca98e21beeacb1cd9d2d5e8c..0a0bd9c770402e450dc51330033a0f7dbb840b44 100644
--- a/src/hotspot/share/code/codeCache.hpp
+++ b/src/hotspot/share/code/codeCache.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1997, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -211,7 +211,7 @@ class CodeCache : AllStatic {
static size_t unallocated_capacity();
static size_t max_capacity();
- static double reverse_free_ratio(int code_blob_type);
+ static double reverse_free_ratio();
static void clear_inline_caches(); // clear all inline caches
static void cleanup_inline_caches(); // clean unloaded/zombie nmethods from inline caches
diff --git a/src/hotspot/share/compiler/compilationPolicy.cpp b/src/hotspot/share/compiler/compilationPolicy.cpp
index 592cb5d65f2f400feb94fd75bced2012a7da7ffd..027b513af695544695ff82d4458bf469abe45ae2 100644
--- a/src/hotspot/share/compiler/compilationPolicy.cpp
+++ b/src/hotspot/share/compiler/compilationPolicy.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2010, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2010, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -315,7 +315,7 @@ double CompilationPolicy::threshold_scale(CompLevel level, int feedback_k) {
// The main intention is to keep enough free space for C2 compiled code
// to achieve peak performance if the code cache is under stress.
if (CompilerConfig::is_tiered() && !CompilationModeFlag::disable_intermediate() && is_c1_compile(level)) {
- double current_reverse_free_ratio = CodeCache::reverse_free_ratio(CodeCache::get_code_blob_type(level));
+ double current_reverse_free_ratio = CodeCache::reverse_free_ratio();
if (current_reverse_free_ratio > _increase_threshold_at_ratio) {
k *= exp(current_reverse_free_ratio - _increase_threshold_at_ratio);
}
diff --git a/src/hotspot/share/compiler/compileBroker.cpp b/src/hotspot/share/compiler/compileBroker.cpp
index 1c8656044b57b3a352b4422ea0cd219173b373f2..6422b719f68de41530e3139ea55b113f510414e8 100644
--- a/src/hotspot/share/compiler/compileBroker.cpp
+++ b/src/hotspot/share/compiler/compileBroker.cpp
@@ -225,11 +225,13 @@ class CompilationLog : public StringEventLog {
}
void log_metaspace_failure(const char* reason) {
+ // Note: This method can be called from non-Java/compiler threads to
+ // log the global metaspace failure that might affect profiling.
ResourceMark rm;
StringLogMessage lm;
lm.print("%4d COMPILE PROFILING SKIPPED: %s", -1, reason);
lm.print("\n");
- log(JavaThread::current(), "%s", (const char*)lm);
+ log(Thread::current(), "%s", (const char*)lm);
}
};
diff --git a/src/hotspot/share/compiler/compilerDefinitions.cpp b/src/hotspot/share/compiler/compilerDefinitions.cpp
index a6445c161b2f3593ae2203f4b62bde89c14d4428..aa8dd0a1be8638c9bc4519177af24624da36744c 100644
--- a/src/hotspot/share/compiler/compilerDefinitions.cpp
+++ b/src/hotspot/share/compiler/compilerDefinitions.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2016, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2016, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -310,7 +310,6 @@ void CompilerConfig::set_compilation_policy_flags() {
}
}
-
if (CompileThresholdScaling < 0) {
vm_exit_during_initialization("Negative value specified for CompileThresholdScaling", NULL);
}
@@ -509,6 +508,10 @@ bool CompilerConfig::check_args_consistency(bool status) {
}
FLAG_SET_CMDLINE(TieredCompilation, false);
}
+ if (SegmentedCodeCache) {
+ warning("SegmentedCodeCache has no meaningful effect with -Xint");
+ FLAG_SET_DEFAULT(SegmentedCodeCache, false);
+ }
#if INCLUDE_JVMCI
if (EnableJVMCI) {
if (!FLAG_IS_DEFAULT(EnableJVMCI) || !FLAG_IS_DEFAULT(UseJVMCICompiler)) {
diff --git a/src/hotspot/share/compiler/compilerDirectives.hpp b/src/hotspot/share/compiler/compilerDirectives.hpp
index 1e1bf850ac36b00f49ae437e0311963ec0ab50f2..bab28e2458ae0c5bd4bd120cf006fd617dc856ca 100644
--- a/src/hotspot/share/compiler/compilerDirectives.hpp
+++ b/src/hotspot/share/compiler/compilerDirectives.hpp
@@ -66,6 +66,7 @@
cflags(PrintIntrinsics, bool, PrintIntrinsics, PrintIntrinsics) \
NOT_PRODUCT(cflags(TraceOptoPipelining, bool, TraceOptoPipelining, TraceOptoPipelining)) \
NOT_PRODUCT(cflags(TraceOptoOutput, bool, TraceOptoOutput, TraceOptoOutput)) \
+NOT_PRODUCT(cflags(TraceEscapeAnalysis, bool, false, TraceEscapeAnalysis)) \
NOT_PRODUCT(cflags(PrintIdeal, bool, PrintIdeal, PrintIdeal)) \
NOT_PRODUCT(cflags(PrintIdealPhase, ccstrlist, "", PrintIdealPhase)) \
cflags(TraceSpilling, bool, TraceSpilling, TraceSpilling) \
diff --git a/src/hotspot/share/compiler/compilerOracle.hpp b/src/hotspot/share/compiler/compilerOracle.hpp
index ab941b01d88b5d755ea0edc03e7f705a0587e788..303c52683f543e435b3eab1fbdd03d3d2d05b7a3 100644
--- a/src/hotspot/share/compiler/compilerOracle.hpp
+++ b/src/hotspot/share/compiler/compilerOracle.hpp
@@ -79,6 +79,7 @@ class methodHandle;
option(TraceOptoPipelining, "TraceOptoPipelining", Bool) \
option(TraceOptoOutput, "TraceOptoOutput", Bool) \
option(TraceSpilling, "TraceSpilling", Bool) \
+NOT_PRODUCT(option(TraceEscapeAnalysis, "TraceEscapeAnalysis", Bool)) \
NOT_PRODUCT(option(PrintIdeal, "PrintIdeal", Bool)) \
NOT_PRODUCT(option(PrintIdealPhase, "PrintIdealPhase", Ccstrlist)) \
NOT_PRODUCT(option(IGVPrintLevel, "IGVPrintLevel", Intx)) \
diff --git a/src/hotspot/share/gc/g1/g1BlockOffsetTable.inline.hpp b/src/hotspot/share/gc/g1/g1BlockOffsetTable.inline.hpp
index f70cb118627225f646cb43345c44b4a5a37cb085..058a9f58785dfd978e02ddd5dbab961159e75276 100644
--- a/src/hotspot/share/gc/g1/g1BlockOffsetTable.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1BlockOffsetTable.inline.hpp
@@ -140,7 +140,7 @@ inline HeapWord* G1BlockOffsetTablePart::forward_to_block_containing_addr(HeapWo
"start of block must be an initialized object");
n += block_size(q);
}
- assert(q <= n, "wrong order for q and addr");
+ assert(q <= addr, "wrong order for q and addr");
assert(addr < n, "wrong order for addr and n");
return q;
}
diff --git a/src/hotspot/share/gc/g1/g1CardSet.cpp b/src/hotspot/share/gc/g1/g1CardSet.cpp
index ad886b1e7e6baa7c3016605dd19926336aeecb23..82092a1d5020c9d17b5af25c947757365ce4e240 100644
--- a/src/hotspot/share/gc/g1/g1CardSet.cpp
+++ b/src/hotspot/share/gc/g1/g1CardSet.cpp
@@ -26,25 +26,19 @@
#include "gc/g1/g1CardSet.inline.hpp"
#include "gc/g1/g1CardSetContainers.inline.hpp"
#include "gc/g1/g1CardSetMemory.inline.hpp"
-#include "gc/g1/g1FromCardCache.hpp"
#include "gc/g1/heapRegion.inline.hpp"
+#include "gc/shared/gcLogPrecious.hpp"
+#include "gc/shared/gcTraceTime.inline.hpp"
#include "memory/allocation.inline.hpp"
#include "runtime/atomic.hpp"
#include "runtime/globals_extension.hpp"
-#include "runtime/mutex.hpp"
#include "utilities/bitMap.inline.hpp"
#include "utilities/concurrentHashTable.inline.hpp"
#include "utilities/globalDefinitions.hpp"
-#include "utilities/lockFreeStack.hpp"
-#include "utilities/spinYield.hpp"
-
-#include "gc/shared/gcLogPrecious.hpp"
-#include "gc/shared/gcTraceTime.inline.hpp"
-#include "runtime/java.hpp"
-G1CardSet::CardSetPtr G1CardSet::FullCardSet = (G1CardSet::CardSetPtr)-1;
+G1CardSet::ContainerPtr G1CardSet::FullCardSet = (G1CardSet::ContainerPtr)-1;
-static uint default_log2_card_region_per_region() {
+static uint default_log2_card_regions_per_region() {
uint log2_card_regions_per_heap_region = 0;
const uint card_container_limit = G1CardSetContainer::LogCardsPerRegionLimit;
@@ -62,7 +56,7 @@ G1CardSetConfiguration::G1CardSetConfiguration() :
G1RemSetHowlNumBuckets, /* num_buckets_in_howl */
(double)G1RemSetCoarsenHowlToFullPercent / 100, /* cards_in_howl_threshold_percent */
(uint)HeapRegion::CardsPerRegion, /* max_cards_in_cardset */
- default_log2_card_region_per_region()) /* log2_card_region_per_region */
+ default_log2_card_regions_per_region()) /* log2_card_regions_per_region */
{
assert((_log2_card_regions_per_heap_region + _log2_cards_per_card_region) == (uint)HeapRegion::LogCardsPerRegion,
"inconsistent heap region virtualization setup");
@@ -73,7 +67,7 @@ G1CardSetConfiguration::G1CardSetConfiguration(uint max_cards_in_array,
uint max_buckets_in_howl,
double cards_in_howl_threshold_percent,
uint max_cards_in_card_set,
- uint log2_card_region_per_region) :
+ uint log2_card_regions_per_region) :
G1CardSetConfiguration(log2i_exact(max_cards_in_card_set), /* inline_ptr_bits_per_card */
max_cards_in_array, /* max_cards_in_array */
cards_in_bitmap_threshold_percent, /* cards_in_bitmap_threshold_percent */
@@ -82,7 +76,7 @@ G1CardSetConfiguration::G1CardSetConfiguration(uint max_cards_in_array,
max_buckets_in_howl),
cards_in_howl_threshold_percent, /* cards_in_howl_threshold_percent */
max_cards_in_card_set, /* max_cards_in_cardset */
- log2_card_region_per_region)
+ log2_card_regions_per_region)
{ }
G1CardSetConfiguration::G1CardSetConfiguration(uint inline_ptr_bits_per_card,
@@ -197,7 +191,7 @@ void G1CardSetCoarsenStats::print_on(outputStream* out) {
}
class G1CardSetHashTable : public CHeapObj<mtGCCardSet> {
- using CardSetPtr = G1CardSet::CardSetPtr;
+ using ContainerPtr = G1CardSet::ContainerPtr;
// Did we insert at least one card in the table?
bool volatile _inserted_card;
@@ -231,12 +225,12 @@ class G1CardSetHashTable : public CHeapObj {
};
class G1CardSetHashTableScan : public StackObj {
- G1CardSet::CardSetPtrClosure* _scan_f;
+ G1CardSet::ContainerPtrClosure* _scan_f;
public:
- explicit G1CardSetHashTableScan(G1CardSet::CardSetPtrClosure* f) : _scan_f(f) { }
+ explicit G1CardSetHashTableScan(G1CardSet::ContainerPtrClosure* f) : _scan_f(f) { }
bool operator()(G1CardSetHashTableValue* value) {
- _scan_f->do_cardsetptr(value->_region_idx, value->_num_occupied, value->_card_set);
+ _scan_f->do_containerptr(value->_region_idx, value->_num_occupied, value->_container);
return true;
}
};
@@ -284,19 +278,19 @@ public:
return found.value();
}
- void iterate_safepoint(G1CardSet::CardSetPtrClosure* cl2) {
+ void iterate_safepoint(G1CardSet::ContainerPtrClosure* cl2) {
G1CardSetHashTableScan cl(cl2);
_table.do_safepoint_scan(cl);
}
- void iterate(G1CardSet::CardSetPtrClosure* cl2) {
+ void iterate(G1CardSet::ContainerPtrClosure* cl2) {
G1CardSetHashTableScan cl(cl2);
_table.do_scan(Thread::current(), cl);
}
void reset() {
if (Atomic::load(&_inserted_card)) {
- _table.unsafe_reset(InitialLogTableSize);
+ _table.unsafe_reset(InitialLogTableSize);
Atomic::store(&_inserted_card, false);
}
}
@@ -343,101 +337,93 @@ G1CardSet::~G1CardSet() {
_mm->flush();
}
-uint G1CardSet::card_set_type_to_mem_object_type(uintptr_t type) const {
- assert(type == G1CardSet::CardSetArrayOfCards ||
- type == G1CardSet::CardSetBitMap ||
- type == G1CardSet::CardSetHowl, "should not allocate card set type %zu", type);
+uint G1CardSet::container_type_to_mem_object_type(uintptr_t type) const {
+ assert(type == G1CardSet::ContainerArrayOfCards ||
+ type == G1CardSet::ContainerBitMap ||
+ type == G1CardSet::ContainerHowl, "should not allocate container type %zu", type);
return (uint)type;
}
uint8_t* G1CardSet::allocate_mem_object(uintptr_t type) {
- return _mm->allocate(card_set_type_to_mem_object_type(type));
+ return _mm->allocate(container_type_to_mem_object_type(type));
}
-void G1CardSet::free_mem_object(CardSetPtr card_set) {
- assert(card_set != G1CardSet::FreeCardSet, "should not free Free card set");
- assert(card_set != G1CardSet::FullCardSet, "should not free Full card set");
+void G1CardSet::free_mem_object(ContainerPtr container) {
+ assert(container != G1CardSet::FreeCardSet, "should not free container FreeCardSet");
+ assert(container != G1CardSet::FullCardSet, "should not free container FullCardSet");
- uintptr_t type = card_set_type(card_set);
- void* value = strip_card_set_type(card_set);
+ uintptr_t type = container_type(container);
+ void* value = strip_container_type(container);
- assert(type == G1CardSet::CardSetArrayOfCards ||
- type == G1CardSet::CardSetBitMap ||
- type == G1CardSet::CardSetHowl, "should not free card set type %zu", type);
+ assert(type == G1CardSet::ContainerArrayOfCards ||
+ type == G1CardSet::ContainerBitMap ||
+ type == G1CardSet::ContainerHowl, "should not free card set type %zu", type);
+ assert(static_cast<G1CardSetContainer*>(value)->refcount() == 1, "must be");
-#ifdef ASSERT
- if (type == G1CardSet::CardSetArrayOfCards ||
- type == G1CardSet::CardSetBitMap ||
- type == G1CardSet::CardSetHowl) {
- G1CardSetContainer* card_set = (G1CardSetContainer*)value;
- assert((card_set->refcount() == 1), "must be");
- }
-#endif
-
- _mm->free(card_set_type_to_mem_object_type(type), value);
+ _mm->free(container_type_to_mem_object_type(type), value);
}
-G1CardSet::CardSetPtr G1CardSet::acquire_card_set(CardSetPtr volatile* card_set_addr) {
+G1CardSet::ContainerPtr G1CardSet::acquire_container(ContainerPtr volatile* container_addr) {
// Update reference counts under RCU critical section to avoid a
// use-after-cleanup bug where we increment a reference count for
// an object whose memory has already been cleaned up and reused.
GlobalCounter::CriticalSection cs(Thread::current());
while (true) {
- // Get cardsetptr and increment refcount atomically wrt to memory reuse.
- CardSetPtr card_set = Atomic::load_acquire(card_set_addr);
- uint cs_type = card_set_type(card_set);
- if (card_set == FullCardSet || cs_type == CardSetInlinePtr) {
- return card_set;
+ // Get ContainerPtr and increment refcount atomically wrt to memory reuse.
+ ContainerPtr container = Atomic::load_acquire(container_addr);
+ uint cs_type = container_type(container);
+ if (container == FullCardSet || cs_type == ContainerInlinePtr) {
+ return container;
}
- G1CardSetContainer* card_set_on_heap = (G1CardSetContainer*)strip_card_set_type(card_set);
+ G1CardSetContainer* container_on_heap = (G1CardSetContainer*)strip_container_type(container);
- if (card_set_on_heap->try_increment_refcount()) {
- assert(card_set_on_heap->refcount() >= 3, "Smallest value is 3");
- return card_set;
+ if (container_on_heap->try_increment_refcount()) {
+ assert(container_on_heap->refcount() >= 3, "Smallest value is 3");
+ return container;
}
}
}
-bool G1CardSet::release_card_set(CardSetPtr card_set) {
- uint cs_type = card_set_type(card_set);
- if (card_set == FullCardSet || cs_type == CardSetInlinePtr) {
+bool G1CardSet::release_container(ContainerPtr container) {
+ uint cs_type = container_type(container);
+ if (container == FullCardSet || cs_type == ContainerInlinePtr) {
return false;
}
- G1CardSetContainer* card_set_on_heap = (G1CardSetContainer*)strip_card_set_type(card_set);
- return card_set_on_heap->decrement_refcount() == 1;
+ G1CardSetContainer* container_on_heap = (G1CardSetContainer*)strip_container_type(container);
+ return container_on_heap->decrement_refcount() == 1;
}
-void G1CardSet::release_and_maybe_free_card_set(CardSetPtr card_set) {
- if (release_card_set(card_set)) {
- free_mem_object(card_set);
+void G1CardSet::release_and_maybe_free_container(ContainerPtr container) {
+ if (release_container(container)) {
+ free_mem_object(container);
}
}
-void G1CardSet::release_and_must_free_card_set(CardSetPtr card_set) {
- bool should_free = release_card_set(card_set);
+void G1CardSet::release_and_must_free_container(ContainerPtr container) {
+ bool should_free = release_container(container);
assert(should_free, "should have been the only one having a reference");
- free_mem_object(card_set);
+ free_mem_object(container);
}
class G1ReleaseCardsets : public StackObj {
G1CardSet* _card_set;
- using CardSetPtr = G1CardSet::CardSetPtr;
+ using ContainerPtr = G1CardSet::ContainerPtr;
- void coarsen_to_full(CardSetPtr* card_set_addr) {
+ void coarsen_to_full(ContainerPtr* container_addr) {
while (true) {
- CardSetPtr cur_card_set = Atomic::load_acquire(card_set_addr);
- uint cs_type = G1CardSet::card_set_type(cur_card_set);
- if (cur_card_set == G1CardSet::FullCardSet) {
+ ContainerPtr cur_container = Atomic::load_acquire(container_addr);
+ uint cs_type = G1CardSet::container_type(cur_container);
+ if (cur_container == G1CardSet::FullCardSet) {
return;
}
- CardSetPtr old_value = Atomic::cmpxchg(card_set_addr, cur_card_set, G1CardSet::FullCardSet);
+ ContainerPtr old_value = Atomic::cmpxchg(container_addr, cur_container, G1CardSet::FullCardSet);
- if (old_value == cur_card_set) {
- _card_set->release_and_maybe_free_card_set(cur_card_set);
+ if (old_value == cur_container) {
+ _card_set->release_and_maybe_free_container(cur_container);
return;
}
}
@@ -446,51 +432,51 @@ class G1ReleaseCardsets : public StackObj {
public:
explicit G1ReleaseCardsets(G1CardSet* card_set) : _card_set(card_set) { }
- void operator ()(CardSetPtr* card_set_addr) {
- coarsen_to_full(card_set_addr);
+ void operator ()(ContainerPtr* container_addr) {
+ coarsen_to_full(container_addr);
}
};
-G1AddCardResult G1CardSet::add_to_array(CardSetPtr card_set, uint card_in_region) {
- G1CardSetArray* array = card_set_ptr(card_set);
+G1AddCardResult G1CardSet::add_to_array(ContainerPtr container, uint card_in_region) {
+ G1CardSetArray* array = container_ptr(container);
return array->add(card_in_region);
}
-G1AddCardResult G1CardSet::add_to_howl(CardSetPtr parent_card_set,
- uint card_region,
- uint card_in_region,
- bool increment_total) {
- G1CardSetHowl* howl = card_set_ptr(parent_card_set);
+G1AddCardResult G1CardSet::add_to_howl(ContainerPtr parent_container,
+ uint card_region,
+ uint card_in_region,
+ bool increment_total) {
+ G1CardSetHowl* howl = container_ptr(parent_container);
G1AddCardResult add_result;
- CardSetPtr to_transfer = nullptr;
- CardSetPtr card_set;
+ ContainerPtr to_transfer = nullptr;
+ ContainerPtr container;
uint bucket = _config->howl_bucket_index(card_in_region);
- volatile CardSetPtr* bucket_entry = howl->get_card_set_addr(bucket);
+ ContainerPtr volatile* bucket_entry = howl->get_container_addr(bucket);
while (true) {
if (Atomic::load(&howl->_num_entries) >= _config->cards_in_howl_threshold()) {
return Overflow;
}
- card_set = acquire_card_set(bucket_entry);
- add_result = add_to_card_set(bucket_entry, card_set, card_region, card_in_region);
+ container = acquire_container(bucket_entry);
+ add_result = add_to_container(bucket_entry, container, card_region, card_in_region);
if (add_result != Overflow) {
break;
}
- // Card set has overflown. Coarsen or retry.
- bool coarsened = coarsen_card_set(bucket_entry, card_set, card_in_region, true /* within_howl */);
- _coarsen_stats.record_coarsening(card_set_type(card_set) + G1CardSetCoarsenStats::CoarsenHowlOffset, !coarsened);
+ // Card set container has overflowed. Coarsen or retry.
+ bool coarsened = coarsen_container(bucket_entry, container, card_in_region, true /* within_howl */);
+ _coarsen_stats.record_coarsening(container_type(container) + G1CardSetCoarsenStats::CoarsenHowlOffset, !coarsened);
if (coarsened) {
- // We have been the one coarsening this card set (and in the process added that card).
+ // We successfully coarsened this card set container (and in the process added the card).
add_result = Added;
- to_transfer = card_set;
+ to_transfer = container;
break;
}
// Somebody else beat us to coarsening. Retry.
- release_and_maybe_free_card_set(card_set);
+ release_and_maybe_free_container(container);
}
if (increment_total && add_result == Added) {
@@ -498,91 +484,91 @@ G1AddCardResult G1CardSet::add_to_howl(CardSetPtr parent_card_set,
}
if (to_transfer != nullptr) {
- transfer_cards_in_howl(parent_card_set, to_transfer, card_region);
+ transfer_cards_in_howl(parent_container, to_transfer, card_region);
}
- release_and_maybe_free_card_set(card_set);
+ release_and_maybe_free_container(container);
return add_result;
}
-G1AddCardResult G1CardSet::add_to_bitmap(CardSetPtr card_set, uint card_in_region) {
- G1CardSetBitMap* bitmap = card_set_ptr(card_set);
+G1AddCardResult G1CardSet::add_to_bitmap(ContainerPtr container, uint card_in_region) {
+ G1CardSetBitMap* bitmap = container_ptr(container);
uint card_offset = _config->howl_bitmap_offset(card_in_region);
return bitmap->add(card_offset, _config->cards_in_howl_bitmap_threshold(), _config->max_cards_in_howl_bitmap());
}
-G1AddCardResult G1CardSet::add_to_inline_ptr(CardSetPtr volatile* card_set_addr, CardSetPtr card_set, uint card_in_region) {
- G1CardSetInlinePtr value(card_set_addr, card_set);
+G1AddCardResult G1CardSet::add_to_inline_ptr(ContainerPtr volatile* container_addr, ContainerPtr container, uint card_in_region) {
+ G1CardSetInlinePtr value(container_addr, container);
return value.add(card_in_region, _config->inline_ptr_bits_per_card(), _config->max_cards_in_inline_ptr());
}
-G1CardSet::CardSetPtr G1CardSet::create_coarsened_array_of_cards(uint card_in_region, bool within_howl) {
+G1CardSet::ContainerPtr G1CardSet::create_coarsened_array_of_cards(uint card_in_region, bool within_howl) {
uint8_t* data = nullptr;
- CardSetPtr new_card_set;
+ ContainerPtr new_container;
if (within_howl) {
uint const size_in_bits = _config->max_cards_in_howl_bitmap();
- uint card_offset = _config->howl_bitmap_offset(card_in_region);
- data = allocate_mem_object(CardSetBitMap);
- new (data) G1CardSetBitMap(card_offset, size_in_bits);
- new_card_set = make_card_set_ptr(data, CardSetBitMap);
+ uint container_offset = _config->howl_bitmap_offset(card_in_region);
+ data = allocate_mem_object(ContainerBitMap);
+ new (data) G1CardSetBitMap(container_offset, size_in_bits);
+ new_container = make_container_ptr(data, ContainerBitMap);
} else {
- data = allocate_mem_object(CardSetHowl);
+ data = allocate_mem_object(ContainerHowl);
new (data) G1CardSetHowl(card_in_region, _config);
- new_card_set = make_card_set_ptr(data, CardSetHowl);
+ new_container = make_container_ptr(data, ContainerHowl);
}
- return new_card_set;
+ return new_container;
}
-bool G1CardSet::coarsen_card_set(volatile CardSetPtr* card_set_addr,
- CardSetPtr cur_card_set,
- uint card_in_region,
- bool within_howl) {
- CardSetPtr new_card_set = nullptr;
+bool G1CardSet::coarsen_container(ContainerPtr volatile* container_addr,
+ ContainerPtr cur_container,
+ uint card_in_region,
+ bool within_howl) {
+ ContainerPtr new_container = nullptr;
- switch (card_set_type(cur_card_set)) {
- case CardSetArrayOfCards : {
- new_card_set = create_coarsened_array_of_cards(card_in_region, within_howl);
+ switch (container_type(cur_container)) {
+ case ContainerArrayOfCards: {
+ new_container = create_coarsened_array_of_cards(card_in_region, within_howl);
break;
}
- case CardSetBitMap: {
- new_card_set = FullCardSet;
+ case ContainerBitMap: {
+ new_container = FullCardSet;
break;
}
- case CardSetInlinePtr: {
+ case ContainerInlinePtr: {
uint const size = _config->max_cards_in_array();
- uint8_t* data = allocate_mem_object(CardSetArrayOfCards);
+ uint8_t* data = allocate_mem_object(ContainerArrayOfCards);
new (data) G1CardSetArray(card_in_region, size);
- new_card_set = make_card_set_ptr(data, CardSetArrayOfCards);
+ new_container = make_container_ptr(data, ContainerArrayOfCards);
break;
}
- case CardSetHowl: {
- new_card_set = FullCardSet; // anything will do at this point.
+ case ContainerHowl: {
+ new_container = FullCardSet; // anything will do at this point.
break;
}
default:
ShouldNotReachHere();
}
- CardSetPtr old_value = Atomic::cmpxchg(card_set_addr, cur_card_set, new_card_set); // Memory order?
- if (old_value == cur_card_set) {
+ ContainerPtr old_value = Atomic::cmpxchg(container_addr, cur_container, new_container); // Memory order?
+ if (old_value == cur_container) {
// Success. Indicate that the cards from the current card set must be transferred
// by this caller.
// Release the hash table reference to the card. The caller still holds the
// reference to this card set, so it can never be released (and we do not need to
// check its result).
- bool should_free = release_card_set(cur_card_set);
+ bool should_free = release_container(cur_container);
assert(!should_free, "must have had more than one reference");
- // Free containers if cur_card_set is CardSetHowl
- if (card_set_type(cur_card_set) == CardSetHowl) {
+ // Free containers if cur_container is ContainerHowl
+ if (container_type(cur_container) == ContainerHowl) {
G1ReleaseCardsets rel(this);
- card_set_ptr(cur_card_set)->iterate(rel, _config->num_buckets_in_howl());
+ container_ptr(cur_container)->iterate(rel, _config->num_buckets_in_howl());
}
return true;
} else {
// Somebody else beat us to coarsening that card set. Exit, but clean up first.
- if (new_card_set != FullCardSet) {
- assert(new_card_set != nullptr, "must not be");
- release_and_must_free_card_set(new_card_set);
+ if (new_container != FullCardSet) {
+ assert(new_container != nullptr, "must not be");
+ release_and_must_free_container(new_container);
}
return false;
}
@@ -599,34 +585,34 @@ public:
}
};
-void G1CardSet::transfer_cards(G1CardSetHashTableValue* table_entry, CardSetPtr source_card_set, uint card_region) {
- assert(source_card_set != FullCardSet, "Should not need to transfer from full");
- // Need to transfer old entries unless there is a Full card set in place now, i.e.
- // the old type has been CardSetBitMap. "Full" contains all elements anyway.
- if (card_set_type(source_card_set) != CardSetHowl) {
+void G1CardSet::transfer_cards(G1CardSetHashTableValue* table_entry, ContainerPtr source_container, uint card_region) {
+ assert(source_container != FullCardSet, "Should not need to transfer from FullCardSet");
+ // Need to transfer old entries unless there is a Full card set container in place now, i.e.
+ // the old type has been ContainerBitMap. "Full" contains all elements anyway.
+ if (container_type(source_container) != ContainerHowl) {
G1TransferCard iter(this, card_region);
- iterate_cards_during_transfer(source_card_set, iter);
+ iterate_cards_during_transfer(source_container, iter);
} else {
- assert(card_set_type(source_card_set) == CardSetHowl, "must be");
+ assert(container_type(source_container) == ContainerHowl, "must be");
// Need to correct for that the Full remembered set occupies more cards than the
// AoCS before.
Atomic::add(&_num_occupied, _config->max_cards_in_region() - table_entry->_num_occupied, memory_order_relaxed);
}
}
-void G1CardSet::transfer_cards_in_howl(CardSetPtr parent_card_set,
- CardSetPtr source_card_set,
- uint card_region) {
- assert(card_set_type(parent_card_set) == CardSetHowl, "must be");
- assert(source_card_set != FullCardSet, "Should not need to transfer from full");
+void G1CardSet::transfer_cards_in_howl(ContainerPtr parent_container,
+ ContainerPtr source_container,
+ uint card_region) {
+ assert(container_type(parent_container) == ContainerHowl, "must be");
+ assert(source_container != FullCardSet, "Should not need to transfer from full");
// Need to transfer old entries unless there is a Full card set in place now, i.e.
- // the old type has been CardSetBitMap.
- if (card_set_type(source_card_set) != CardSetBitMap) {
- // We only need to transfer from anything below CardSetBitMap.
+ // the old type has been ContainerBitMap.
+ if (container_type(source_container) != ContainerBitMap) {
+ // We only need to transfer from anything below ContainerBitMap.
G1TransferCard iter(this, card_region);
- iterate_cards_during_transfer(source_card_set, iter);
+ iterate_cards_during_transfer(source_container, iter);
} else {
- uint diff = _config->max_cards_in_howl_bitmap() - card_set_ptr(source_card_set)->num_bits_set();
+ uint diff = _config->max_cards_in_howl_bitmap() - container_ptr(source_container)->num_bits_set();
// Need to correct for that the Full remembered set occupies more cards than the
// bitmap before.
@@ -635,10 +621,10 @@ void G1CardSet::transfer_cards_in_howl(CardSetPtr parent_card_set,
// G1CardSet::add_to_howl after coarsening.
diff -= 1;
- G1CardSetHowl* howling_array = card_set_ptr(parent_card_set);
+ G1CardSetHowl* howling_array = container_ptr(parent_container);
Atomic::add(&howling_array->_num_entries, diff, memory_order_relaxed);
- G1CardSetHashTableValue* table_entry = get_card_set(card_region);
+ G1CardSetHashTableValue* table_entry = get_container(card_region);
assert(table_entry != nullptr, "Table entry not found for transferred cards");
Atomic::add(&table_entry->_num_occupied, diff, memory_order_relaxed);
@@ -647,72 +633,75 @@ void G1CardSet::transfer_cards_in_howl(CardSetPtr parent_card_set,
}
}
-G1AddCardResult G1CardSet::add_to_card_set(volatile CardSetPtr* card_set_addr, CardSetPtr card_set, uint card_region, uint card_in_region, bool increment_total) {
- assert(card_set_addr != nullptr, "Cannot add to empty cardset");
+G1AddCardResult G1CardSet::add_to_container(ContainerPtr volatile* container_addr,
+ ContainerPtr container,
+ uint card_region,
+ uint card_in_region,
+ bool increment_total) {
+ assert(container_addr != nullptr, "must be");
G1AddCardResult add_result;
- switch (card_set_type(card_set)) {
- case CardSetInlinePtr: {
- add_result = add_to_inline_ptr(card_set_addr, card_set, card_in_region);
+ switch (container_type(container)) {
+ case ContainerInlinePtr: {
+ add_result = add_to_inline_ptr(container_addr, container, card_in_region);
break;
}
- case CardSetArrayOfCards : {
- add_result = add_to_array(card_set, card_in_region);
+ case ContainerArrayOfCards: {
+ add_result = add_to_array(container, card_in_region);
break;
}
- case CardSetBitMap: {
- add_result = add_to_bitmap(card_set, card_in_region);
+ case ContainerBitMap: {
+ add_result = add_to_bitmap(container, card_in_region);
break;
}
- case CardSetHowl: {
- assert(CardSetHowl == card_set_type(FullCardSet), "must be");
- if (card_set == FullCardSet) {
+ case ContainerHowl: {
+ assert(ContainerHowl == container_type(FullCardSet), "must be");
+ if (container == FullCardSet) {
return Found;
}
- add_result = add_to_howl(card_set, card_region, card_in_region, increment_total);
+ add_result = add_to_howl(container, card_region, card_in_region, increment_total);
break;
}
default:
ShouldNotReachHere();
}
-
return add_result;
}
-G1CardSetHashTableValue* G1CardSet::get_or_add_card_set(uint card_region, bool* should_grow_table) {
+G1CardSetHashTableValue* G1CardSet::get_or_add_container(uint card_region, bool* should_grow_table) {
return _table->get_or_add(card_region, should_grow_table);
}
-G1CardSetHashTableValue* G1CardSet::get_card_set(uint card_region) {
+G1CardSetHashTableValue* G1CardSet::get_container(uint card_region) {
return _table->get(card_region);
}
G1AddCardResult G1CardSet::add_card(uint card_region, uint card_in_region, bool increment_total) {
G1AddCardResult add_result;
- CardSetPtr to_transfer = nullptr;
- CardSetPtr card_set;
+ ContainerPtr to_transfer = nullptr;
+ ContainerPtr container;
bool should_grow_table = false;
- G1CardSetHashTableValue* table_entry = get_or_add_card_set(card_region, &should_grow_table);
+ G1CardSetHashTableValue* table_entry = get_or_add_container(card_region, &should_grow_table);
while (true) {
- card_set = acquire_card_set(&table_entry->_card_set);
- add_result = add_to_card_set(&table_entry->_card_set, card_set, card_region, card_in_region, increment_total);
+ container = acquire_container(&table_entry->_container);
+ add_result = add_to_container(&table_entry->_container, container, card_region, card_in_region, increment_total);
if (add_result != Overflow) {
break;
}
// Card set has overflown. Coarsen or retry.
- bool coarsened = coarsen_card_set(&table_entry->_card_set, card_set, card_in_region);
- _coarsen_stats.record_coarsening(card_set_type(card_set), !coarsened);
+ bool coarsened = coarsen_container(&table_entry->_container, container, card_in_region);
+ _coarsen_stats.record_coarsening(container_type(container), !coarsened);
if (coarsened) {
- // We have been the one coarsening this card set (and in the process added that card).
+ // We successfully coarsened this card set container (and in the process added the card).
add_result = Added;
- to_transfer = card_set;
+ to_transfer = container;
break;
}
// Somebody else beat us to coarsening. Retry.
- release_and_maybe_free_card_set(card_set);
+ release_and_maybe_free_container(container);
}
if (increment_total && add_result == Added) {
@@ -726,7 +715,7 @@ G1AddCardResult G1CardSet::add_card(uint card_region, uint card_in_region, bool
transfer_cards(table_entry, to_transfer, card_region);
}
- release_and_maybe_free_card_set(card_set);
+ release_and_maybe_free_container(container);
return add_result;
}
@@ -735,29 +724,29 @@ bool G1CardSet::contains_card(uint card_region, uint card_in_region) {
assert(card_in_region < _config->max_cards_in_region(),
"Card %u is beyond max %u", card_in_region, _config->max_cards_in_region());
- // Protect the card set from reclamation.
+ // Protect the card set container from reclamation.
GlobalCounter::CriticalSection cs(Thread::current());
- G1CardSetHashTableValue* table_entry = get_card_set(card_region);
+ G1CardSetHashTableValue* table_entry = get_container(card_region);
if (table_entry == nullptr) {
return false;
}
- CardSetPtr card_set = table_entry->_card_set;
- if (card_set == FullCardSet) {
+ ContainerPtr container = table_entry->_container;
+ if (container == FullCardSet) {
// contains_card() is not a performance critical method so we do not hide that
// case in the switch below.
return true;
}
- switch (card_set_type(card_set)) {
- case CardSetInlinePtr: {
- G1CardSetInlinePtr ptr(card_set);
+ switch (container_type(container)) {
+ case ContainerInlinePtr: {
+ G1CardSetInlinePtr ptr(container);
return ptr.contains(card_in_region, _config->inline_ptr_bits_per_card());
}
- case CardSetArrayOfCards : return card_set_ptr(card_set)->contains(card_in_region);
- case CardSetBitMap: return card_set_ptr(card_set)->contains(card_in_region, _config->max_cards_in_howl_bitmap());
- case CardSetHowl: {
- G1CardSetHowl* howling_array = card_set_ptr(card_set);
+ case ContainerArrayOfCards: return container_ptr(container)->contains(card_in_region);
+ case ContainerBitMap: return container_ptr(container)->contains(card_in_region, _config->max_cards_in_howl_bitmap());
+ case ContainerHowl: {
+ G1CardSetHowl* howling_array = container_ptr(container);
return howling_array->contains(card_in_region, _config);
}
@@ -767,53 +756,53 @@ bool G1CardSet::contains_card(uint card_region, uint card_in_region) {
}
void G1CardSet::print_info(outputStream* st, uint card_region, uint card_in_region) {
- G1CardSetHashTableValue* table_entry = get_card_set(card_region);
+ G1CardSetHashTableValue* table_entry = get_container(card_region);
if (table_entry == nullptr) {
st->print("NULL card set");
return;
}
- CardSetPtr card_set = table_entry->_card_set;
- if (card_set == FullCardSet) {
+ ContainerPtr container = table_entry->_container;
+ if (container == FullCardSet) {
st->print("FULL card set)");
return;
}
- switch (card_set_type(card_set)) {
- case CardSetInlinePtr: {
+ switch (container_type(container)) {
+ case ContainerInlinePtr: {
st->print("InlinePtr not containing %u", card_in_region);
break;
}
- case CardSetArrayOfCards : {
+ case ContainerArrayOfCards: {
st->print("AoC not containing %u", card_in_region);
break;
}
- case CardSetBitMap: {
+ case ContainerBitMap: {
st->print("BitMap not containing %u", card_in_region);
break;
}
- case CardSetHowl: {
- st->print("CardSetHowl not containing %u", card_in_region);
+ case ContainerHowl: {
+ st->print("ContainerHowl not containing %u", card_in_region);
break;
}
- default: st->print("Unknown card set type %u", card_set_type(card_set)); ShouldNotReachHere(); break;
+ default: st->print("Unknown card set container type %u", container_type(container)); ShouldNotReachHere(); break;
}
}
template
-void G1CardSet::iterate_cards_during_transfer(CardSetPtr const card_set, CardVisitor& cl) {
- uint type = card_set_type(card_set);
- assert(type == CardSetInlinePtr || type == CardSetArrayOfCards,
+void G1CardSet::iterate_cards_during_transfer(ContainerPtr const container, CardVisitor& cl) {
+ uint type = container_type(container);
+ assert(type == ContainerInlinePtr || type == ContainerArrayOfCards,
"invalid card set type %d to transfer from",
- card_set_type(card_set));
+ container_type(container));
switch (type) {
- case CardSetInlinePtr: {
- G1CardSetInlinePtr ptr(card_set);
+ case ContainerInlinePtr: {
+ G1CardSetInlinePtr ptr(container);
ptr.iterate(cl, _config->inline_ptr_bits_per_card());
return;
}
- case CardSetArrayOfCards : {
- card_set_ptr(card_set)->iterate(cl);
+ case ContainerArrayOfCards: {
+ container_ptr(container)->iterate(cl);
return;
}
default:
@@ -821,7 +810,7 @@ void G1CardSet::iterate_cards_during_transfer(CardSetPtr const card_set, CardVis
}
}
-void G1CardSet::iterate_containers(CardSetPtrClosure* cl, bool at_safepoint) {
+void G1CardSet::iterate_containers(ContainerPtrClosure* cl, bool at_safepoint) {
if (at_safepoint) {
_table->iterate_safepoint(cl);
} else {
@@ -852,7 +841,7 @@ public:
};
template class CardOrRanges>
-class G1CardSetContainersClosure : public G1CardSet::CardSetPtrClosure {
+class G1CardSetContainersClosure : public G1CardSet::ContainerPtrClosure {
G1CardSet* _card_set;
Closure& _cl;
@@ -863,9 +852,9 @@ public:
_card_set(card_set),
_cl(cl) { }
- void do_cardsetptr(uint region_idx, size_t num_occupied, G1CardSet::CardSetPtr card_set) override {
+ void do_containerptr(uint region_idx, size_t num_occupied, G1CardSet::ContainerPtr container) override {
CardOrRanges cl(_cl, region_idx);
- _card_set->iterate_cards_or_ranges_in_container(card_set, cl);
+ _card_set->iterate_cards_or_ranges_in_container(container, cl);
}
};
@@ -887,13 +876,13 @@ size_t G1CardSet::occupied() const {
}
size_t G1CardSet::num_containers() {
- class GetNumberOfContainers : public CardSetPtrClosure {
+ class GetNumberOfContainers : public ContainerPtrClosure {
public:
size_t _count;
- GetNumberOfContainers() : CardSetPtrClosure(), _count(0) { }
+ GetNumberOfContainers() : ContainerPtrClosure(), _count(0) { }
- void do_cardsetptr(uint region_idx, size_t num_occupied, CardSetPtr card_set) override {
+ void do_containerptr(uint region_idx, size_t num_occupied, ContainerPtr container) override {
_count++;
}
} cl;
diff --git a/src/hotspot/share/gc/g1/g1CardSet.hpp b/src/hotspot/share/gc/g1/g1CardSet.hpp
index 465984d713873d76bb52e42928534b0c364eb39c..946d8cb73382b954ecf23d23124c909edc3e477f 100644
--- a/src/hotspot/share/gc/g1/g1CardSet.hpp
+++ b/src/hotspot/share/gc/g1/g1CardSet.hpp
@@ -26,10 +26,7 @@
#define SHARE_GC_G1_G1CARDSET_HPP
#include "memory/allocation.hpp"
-#include "memory/padded.hpp"
-#include "oops/oopsHierarchy.hpp"
#include "utilities/concurrentHashTable.hpp"
-#include "utilities/lockFreeStack.hpp"
class G1CardSetAllocOptions;
class G1CardSetHashTable;
@@ -147,10 +144,10 @@ public:
class G1CardSetCoarsenStats {
public:
// Number of entries in the statistics tables: since we index with the source
- // cardset of the coarsening, this is the total number of combinations of
- // card sets - 1.
+ // container of the coarsening, this is the total number of combinations of
+ // card set containers - 1.
static constexpr size_t NumCoarsenCategories = 7;
- // Coarsening statistics for the possible CardSetPtr in the Howl card set
+ // Coarsening statistics for the possible ContainerPtr in the Howl card set
// start from this offset.
static constexpr size_t CoarsenHowlOffset = 4;
@@ -173,14 +170,14 @@ public:
void print_on(outputStream* out);
};
-// Sparse set of card indexes comprising a remembered set on the Java heap. Card
+// Set of card indexes comprising a remembered set on the Java heap. Card
// size is assumed to be card table card size.
//
// Technically it is implemented using a ConcurrentHashTable that stores a card
// set container for every region containing at least one card.
//
// There are in total five different containers, encoded in the ConcurrentHashTable
-// node as CardSetPtr. A CardSetPtr may cover the whole region or just a part of
+// node as ContainerPtr. A ContainerPtr may cover the whole region or just a part of
// it.
// See its description below for more information.
class G1CardSet : public CHeapObj {
@@ -194,46 +191,46 @@ class G1CardSet : public CHeapObj {
static G1CardSetCoarsenStats _coarsen_stats; // Coarsening statistics since VM start.
static G1CardSetCoarsenStats _last_coarsen_stats; // Coarsening statistics at last GC.
public:
- // Two lower bits are used to encode the card storage types
- static const uintptr_t CardSetPtrHeaderSize = 2;
+ // Two lower bits are used to encode the card set container types
+ static const uintptr_t ContainerPtrHeaderSize = 2;
- // CardSetPtr represents the card storage type of a given covered area. It encodes
- // a type in the LSBs, in addition to having a few significant values.
+ // ContainerPtr represents the card set container type of a given covered area.
+ // It encodes a type in the LSBs, in addition to having a few significant values.
//
// Possible encodings:
//
// 0...00000 free (Empty, should never happen)
- // 1...11111 full All card indexes in the whole area this CardSetPtr covers are part of this container.
- // X...XXX00 inline-ptr-cards A handful of card indexes covered by this CardSetPtr are encoded within the CardSetPtr.
+ // 1...11111 full All card indexes in the whole area this ContainerPtr covers are part of this container.
+ // X...XXX00 inline-ptr-cards A handful of card indexes covered by this ContainerPtr are encoded within the ContainerPtr.
// X...XXX01 array of cards The container is a contiguous array of card indexes.
// X...XXX10 bitmap The container uses a bitmap to determine whether a given index is part of this set.
- // X...XXX11 howl This is a card set container containing an array of CardSetPtr, with each CardSetPtr
+ // X...XXX11 howl This is a card set container containing an array of ContainerPtr, with each ContainerPtr
// limited to a sub-range of the original range. Currently only one level of this
// container is supported.
- typedef void* CardSetPtr;
+ using ContainerPtr = void*;
// Coarsening happens in the order below:
- // CardSetInlinePtr -> CardSetArrayOfCards -> CardSetHowl -> Full
- // Corsening of containers inside the CardSetHowl happens in the order:
- // CardSetInlinePtr -> CardSetArrayOfCards -> CardSetBitMap -> Full
- static const uintptr_t CardSetInlinePtr = 0x0;
- static const uintptr_t CardSetArrayOfCards = 0x1;
- static const uintptr_t CardSetBitMap = 0x2;
- static const uintptr_t CardSetHowl = 0x3;
+ // ContainerInlinePtr -> ContainerArrayOfCards -> ContainerHowl -> Full
+ // Coarsening of containers inside the ContainerHowl happens in the order:
+ // ContainerInlinePtr -> ContainerArrayOfCards -> ContainerBitMap -> Full
+ static const uintptr_t ContainerInlinePtr = 0x0;
+ static const uintptr_t ContainerArrayOfCards = 0x1;
+ static const uintptr_t ContainerBitMap = 0x2;
+ static const uintptr_t ContainerHowl = 0x3;
// The special sentinel values
- static constexpr CardSetPtr FreeCardSet = nullptr;
- // Unfortunately we can't make (G1CardSet::CardSetPtr)-1 constexpr because
+ static constexpr ContainerPtr FreeCardSet = nullptr;
+ // Unfortunately we can't make (G1CardSet::ContainerPtr)-1 constexpr because
// reinterpret_casts are forbidden in constexprs. Use a regular static instead.
- static CardSetPtr FullCardSet;
+ static ContainerPtr FullCardSet;
- static const uintptr_t CardSetPtrTypeMask = ((uintptr_t)1 << CardSetPtrHeaderSize) - 1;
+ static const uintptr_t ContainerPtrTypeMask = ((uintptr_t)1 << ContainerPtrHeaderSize) - 1;
- static CardSetPtr strip_card_set_type(CardSetPtr ptr) { return (CardSetPtr)((uintptr_t)ptr & ~CardSetPtrTypeMask); }
+ static ContainerPtr strip_container_type(ContainerPtr ptr) { return (ContainerPtr)((uintptr_t)ptr & ~ContainerPtrTypeMask); }
- static uint card_set_type(CardSetPtr ptr) { return (uintptr_t)ptr & CardSetPtrTypeMask; }
+ static uint container_type(ContainerPtr ptr) { return (uintptr_t)ptr & ContainerPtrTypeMask; }
template
- static T* card_set_ptr(CardSetPtr ptr);
+ static T* container_ptr(ContainerPtr ptr);
private:
G1CardSetMemoryManager* _mm;
@@ -245,42 +242,42 @@ private:
// be (slightly) more cards in the card set than this value in reality.
size_t _num_occupied;
- CardSetPtr make_card_set_ptr(void* value, uintptr_t type);
+ ContainerPtr make_container_ptr(void* value, uintptr_t type);
- CardSetPtr acquire_card_set(CardSetPtr volatile* card_set_addr);
- // Returns true if the card set should be released
- bool release_card_set(CardSetPtr card_set);
+ ContainerPtr acquire_container(ContainerPtr volatile* container_addr);
+ // Returns true if the card set container should be released
+ bool release_container(ContainerPtr container);
// Release card set and free if needed.
- void release_and_maybe_free_card_set(CardSetPtr card_set);
+ void release_and_maybe_free_container(ContainerPtr container);
// Release card set and free (and it must be freeable).
- void release_and_must_free_card_set(CardSetPtr card_set);
+ void release_and_must_free_container(ContainerPtr container);
- // Coarsens the CardSet cur_card_set to the next level; tries to replace the
- // previous CardSet with a new one which includes the given card_in_region.
- // coarsen_card_set does not transfer cards from cur_card_set
- // to the new card_set. Transfer is achieved by transfer_cards.
- // Returns true if this was the thread that coarsened the CardSet (and added the card).
- bool coarsen_card_set(CardSetPtr volatile* card_set_addr,
- CardSetPtr cur_card_set,
- uint card_in_region, bool within_howl = false);
+ // Coarsens the card set container cur_container to the next level; tries to replace the
+ // previous ContainerPtr with a new one which includes the given card_in_region.
+ // coarsen_container does not transfer cards from cur_container
+ // to the new container. Transfer is achieved by transfer_cards.
+ // Returns true if this was the thread that coarsened the container (and added the card).
+ bool coarsen_container(ContainerPtr volatile* container_addr,
+ ContainerPtr cur_container,
+ uint card_in_region, bool within_howl = false);
- CardSetPtr create_coarsened_array_of_cards(uint card_in_region, bool within_howl);
+ ContainerPtr create_coarsened_array_of_cards(uint card_in_region, bool within_howl);
// Transfer entries from source_card_set to a recently installed coarser storage type
- // We only need to transfer anything finer than CardSetBitMap. "Full" contains
+ // We only need to transfer anything finer than ContainerBitMap. "Full" contains
// all elements anyway.
- void transfer_cards(G1CardSetHashTableValue* table_entry, CardSetPtr source_card_set, uint card_region);
- void transfer_cards_in_howl(CardSetPtr parent_card_set, CardSetPtr source_card_set, uint card_region);
+ void transfer_cards(G1CardSetHashTableValue* table_entry, ContainerPtr source_container, uint card_region);
+ void transfer_cards_in_howl(ContainerPtr parent_container, ContainerPtr source_container, uint card_region);
- G1AddCardResult add_to_card_set(CardSetPtr volatile* card_set_addr, CardSetPtr card_set, uint card_region, uint card, bool increment_total = true);
+ G1AddCardResult add_to_container(ContainerPtr volatile* container_addr, ContainerPtr container, uint card_region, uint card, bool increment_total = true);
- G1AddCardResult add_to_inline_ptr(CardSetPtr volatile* card_set_addr, CardSetPtr card_set, uint card_in_region);
- G1AddCardResult add_to_array(CardSetPtr card_set, uint card_in_region);
- G1AddCardResult add_to_bitmap(CardSetPtr card_set, uint card_in_region);
- G1AddCardResult add_to_howl(CardSetPtr parent_card_set, uint card_region, uint card_in_region, bool increment_total = true);
+ G1AddCardResult add_to_inline_ptr(ContainerPtr volatile* container_addr, ContainerPtr container, uint card_in_region);
+ G1AddCardResult add_to_array(ContainerPtr container, uint card_in_region);
+ G1AddCardResult add_to_bitmap(ContainerPtr container, uint card_in_region);
+ G1AddCardResult add_to_howl(ContainerPtr parent_container, uint card_region, uint card_in_region, bool increment_total = true);
- G1CardSetHashTableValue* get_or_add_card_set(uint card_region, bool* should_grow_table);
- G1CardSetHashTableValue* get_card_set(uint card_region);
+ G1CardSetHashTableValue* get_or_add_container(uint card_region, bool* should_grow_table);
+ G1CardSetHashTableValue* get_container(uint card_region);
// Iterate over cards of a card set container during transfer of the cards from
// one container to another. Executes
@@ -289,11 +286,11 @@ private:
//
//   void operator ()(uint card_idx)
//
// on the given class.
template <class CardVisitor>
- void iterate_cards_during_transfer(CardSetPtr const card_set, CardVisitor& vl);
+ void iterate_cards_during_transfer(ContainerPtr const container, CardVisitor& vl);
- uint card_set_type_to_mem_object_type(uintptr_t type) const;
+ uint container_type_to_mem_object_type(uintptr_t type) const;
uint8_t* allocate_mem_object(uintptr_t type);
- void free_mem_object(CardSetPtr card_set);
+ void free_mem_object(ContainerPtr container);
public:
G1CardSetConfiguration* config() const { return _config; }
@@ -302,8 +299,8 @@ public:
G1CardSet(G1CardSetConfiguration* config, G1CardSetMemoryManager* mm);
virtual ~G1CardSet();
- // Adds the given card to this set, returning an appropriate result. If added,
- // updates the total count.
+ // Adds the given card to this set, returning an appropriate result.
+ // If increment_total is true and the card has been added, updates the total count.
G1AddCardResult add_card(uint card_region, uint card_in_region, bool increment_total = true);
bool contains_card(uint card_region, uint card_in_region);
@@ -351,14 +348,14 @@ public:
// start_iterate().
//
template <class CardOrRangeVisitor>
- void iterate_cards_or_ranges_in_container(CardSetPtr const card_set, CardOrRangeVisitor& cl);
+ void iterate_cards_or_ranges_in_container(ContainerPtr const container, CardOrRangeVisitor& cl);
- class CardSetPtrClosure {
+ class ContainerPtrClosure {
public:
- virtual void do_cardsetptr(uint region_idx, size_t num_occupied, CardSetPtr card_set) = 0;
+ virtual void do_containerptr(uint region_idx, size_t num_occupied, ContainerPtr container) = 0;
};
- void iterate_containers(CardSetPtrClosure* cl, bool safepoint = false);
+ void iterate_containers(ContainerPtrClosure* cl, bool safepoint = false);
class CardClosure {
public:
@@ -370,13 +367,13 @@ public:
class G1CardSetHashTableValue {
public:
- using CardSetPtr = G1CardSet::CardSetPtr;
+ using ContainerPtr = G1CardSet::ContainerPtr;
const uint _region_idx;
uint volatile _num_occupied;
- CardSetPtr volatile _card_set;
+ ContainerPtr volatile _container;
- G1CardSetHashTableValue(uint region_idx, CardSetPtr card_set) : _region_idx(region_idx), _num_occupied(0), _card_set(card_set) { }
+ G1CardSetHashTableValue(uint region_idx, ContainerPtr container) : _region_idx(region_idx), _num_occupied(0), _container(container) { }
};
class G1CardSetHashTableConfig : public StackObj {
@@ -391,6 +388,6 @@ public:
static void free_node(void* context, void* memory, Value const& value);
};
-typedef ConcurrentHashTable<G1CardSetHashTableConfig, mtGCCardSet> CardSetHash;
+using CardSetHash = ConcurrentHashTable<G1CardSetHashTableConfig, mtGCCardSet>;
#endif // SHARE_GC_G1_G1CARDSET_HPP
diff --git a/src/hotspot/share/gc/g1/g1CardSet.inline.hpp b/src/hotspot/share/gc/g1/g1CardSet.inline.hpp
index 99938b4b74eb55313e244ecfebe5b07806dd9c55..49d7928735a300f577aa4cdc6c85b30d843de5a5 100644
--- a/src/hotspot/share/gc/g1/g1CardSet.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1CardSet.inline.hpp
@@ -28,55 +28,54 @@
#include "gc/g1/g1CardSet.hpp"
#include "gc/g1/g1CardSetContainers.inline.hpp"
#include "gc/g1/g1GCPhaseTimes.hpp"
-#include "runtime/atomic.hpp"
#include "logging/log.hpp"
template <class T>
-inline T* G1CardSet::card_set_ptr(CardSetPtr ptr) {
- return (T*)strip_card_set_type(ptr);
+inline T* G1CardSet::container_ptr(ContainerPtr ptr) {
+ return (T*)strip_container_type(ptr);
}
-inline G1CardSet::CardSetPtr G1CardSet::make_card_set_ptr(void* value, uintptr_t type) {
- assert(card_set_type(value) == 0, "Given ptr " PTR_FORMAT " already has type bits set", p2i(value));
- return (CardSetPtr)((uintptr_t)value | type);
+inline G1CardSet::ContainerPtr G1CardSet::make_container_ptr(void* value, uintptr_t type) {
+ assert(container_type(value) == 0, "Given ptr " PTR_FORMAT " already has type bits set", p2i(value));
+ return (ContainerPtr)((uintptr_t)value | type);
}
template <class CardOrRangeVisitor>
-inline void G1CardSet::iterate_cards_or_ranges_in_container(CardSetPtr const card_set, CardOrRangeVisitor& cl) {
- switch (card_set_type(card_set)) {
- case CardSetInlinePtr: {
+inline void G1CardSet::iterate_cards_or_ranges_in_container(ContainerPtr const container, CardOrRangeVisitor& cl) {
+ switch (container_type(container)) {
+ case ContainerInlinePtr: {
if (cl.start_iterate(G1GCPhaseTimes::MergeRSMergedInline)) {
- G1CardSetInlinePtr ptr(card_set);
+ G1CardSetInlinePtr ptr(container);
ptr.iterate(cl, _config->inline_ptr_bits_per_card());
}
return;
}
- case CardSetArrayOfCards : {
+ case ContainerArrayOfCards: {
if (cl.start_iterate(G1GCPhaseTimes::MergeRSMergedArrayOfCards)) {
- card_set_ptr<G1CardSetArray>(card_set)->iterate(cl);
+ container_ptr<G1CardSetArray>(container)->iterate(cl);
}
return;
}
- case CardSetBitMap: {
+ case ContainerBitMap: {
// There is no first-level bitmap spanning the whole area.
ShouldNotReachHere();
return;
}
- case CardSetHowl: {
- assert(card_set_type(FullCardSet) == CardSetHowl, "Must be");
- if (card_set == FullCardSet) {
+ case ContainerHowl: {
+ assert(container_type(FullCardSet) == ContainerHowl, "Must be");
+ if (container == FullCardSet) {
if (cl.start_iterate(G1GCPhaseTimes::MergeRSMergedFull)) {
cl(0, _config->max_cards_in_region());
}
return;
}
if (cl.start_iterate(G1GCPhaseTimes::MergeRSMergedHowl)) {
- card_set_ptr<G1CardSetHowl>(card_set)->iterate(cl, _config);
+ container_ptr<G1CardSetHowl>(container)->iterate(cl, _config);
}
return;
}
}
- log_error(gc)("Unkown card set type %u", card_set_type(card_set));
+ log_error(gc)("Unknown card set container type %u", container_type(container));
ShouldNotReachHere();
}
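The `make_container_ptr`/`container_type`/`strip_container_type` trio above relies on aligned pointers having free low bits. A minimal standalone sketch of that tagging scheme, with illustrative names and a two-bit tag field (not the exact HotSpot constants):

```cpp
#include <cassert>
#include <cstdint>

// Low-bit pointer tagging: store a small type code in the unused low bits of
// an aligned pointer, as make_container_ptr / container_type do above.
constexpr uintptr_t kTypeMask = 0x3;  // two tag bits; valid for >= 4-byte alignment

inline void* make_tagged(void* p, uintptr_t type) {
  // Mirrors the assert in make_container_ptr: the pointer must be clean.
  assert(((uintptr_t)p & kTypeMask) == 0 && "pointer already has tag bits set");
  return (void*)((uintptr_t)p | type);
}

inline uintptr_t tag_of(void* tagged) {
  return (uintptr_t)tagged & kTypeMask;
}

inline void* strip_tag(void* tagged) {
  return (void*)((uintptr_t)tagged & ~kTypeMask);
}
```

The switch in `iterate_cards_or_ranges_in_container` dispatches on exactly this kind of tag before stripping it to recover the real object pointer.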
diff --git a/src/hotspot/share/gc/g1/g1CardSetContainers.hpp b/src/hotspot/share/gc/g1/g1CardSetContainers.hpp
index b74a68c84ae225644af4ee64f0b051843694cb0b..453594da3f9e570d0ad7da90999cd5bb544c14e7 100644
--- a/src/hotspot/share/gc/g1/g1CardSetContainers.hpp
+++ b/src/hotspot/share/gc/g1/g1CardSetContainers.hpp
@@ -28,15 +28,10 @@
#include "gc/g1/g1CardSet.hpp"
#include "memory/allocation.hpp"
#include "runtime/atomic.hpp"
-#include "utilities/bitMap.inline.hpp"
+#include "utilities/bitMap.hpp"
#include "utilities/globalDefinitions.hpp"
-#include "utilities/spinYield.hpp"
-#include "logging/log.hpp"
-
-#include "runtime/thread.inline.hpp"
-
-// A helper class to encode a few card indexes within a CardSetPtr.
+// A helper class to encode a few card indexes within a ContainerPtr.
//
// The pointer value (either 32 or 64 bits) is split into two areas:
//
@@ -70,16 +65,16 @@
class G1CardSetInlinePtr : public StackObj {
friend class G1CardSetContainersTest;
- typedef G1CardSet::CardSetPtr CardSetPtr;
+ using ContainerPtr = G1CardSet::ContainerPtr;
- CardSetPtr volatile * _value_addr;
- CardSetPtr _value;
+ ContainerPtr volatile * _value_addr;
+ ContainerPtr _value;
static const uint SizeFieldLen = 3;
static const uint SizeFieldPos = 2;
- static const uint HeaderSize = G1CardSet::CardSetPtrHeaderSize + SizeFieldLen;
+ static const uint HeaderSize = G1CardSet::ContainerPtrHeaderSize + SizeFieldLen;
- static const uint BitsInValue = sizeof(CardSetPtr) * BitsPerByte;
+ static const uint BitsInValue = sizeof(ContainerPtr) * BitsPerByte;
static const uintptr_t SizeFieldMask = (((uint)1 << SizeFieldLen) - 1) << SizeFieldPos;
@@ -87,9 +82,9 @@ class G1CardSetInlinePtr : public StackObj {
return (idx * bits_per_card + HeaderSize);
}
- static CardSetPtr merge(CardSetPtr orig_value, uint card_in_region, uint idx, uint bits_per_card);
+ static ContainerPtr merge(ContainerPtr orig_value, uint card_in_region, uint idx, uint bits_per_card);
- static uint card_at(CardSetPtr value, uint const idx, uint const bits_per_card) {
+ static uint card_at(ContainerPtr value, uint const idx, uint const bits_per_card) {
uint8_t card_pos = card_pos_for(idx, bits_per_card);
uint result = ((uintptr_t)value >> card_pos) & (((uintptr_t)1 << bits_per_card) - 1);
return result;
@@ -98,14 +93,14 @@ class G1CardSetInlinePtr : public StackObj {
uint find(uint const card_idx, uint const bits_per_card, uint start_at, uint num_cards);
public:
- G1CardSetInlinePtr() : _value_addr(nullptr), _value((CardSetPtr)G1CardSet::CardSetInlinePtr) { }
+ G1CardSetInlinePtr() : _value_addr(nullptr), _value((ContainerPtr)G1CardSet::ContainerInlinePtr) { }
- G1CardSetInlinePtr(CardSetPtr value) : _value_addr(nullptr), _value(value) {
- assert(G1CardSet::card_set_type(_value) == G1CardSet::CardSetInlinePtr, "Value " PTR_FORMAT " is not a valid G1CardSetInPtr.", p2i(_value));
+ G1CardSetInlinePtr(ContainerPtr value) : _value_addr(nullptr), _value(value) {
+ assert(G1CardSet::container_type(_value) == G1CardSet::ContainerInlinePtr, "Value " PTR_FORMAT " is not a valid G1CardSetInlinePtr.", p2i(_value));
}
- G1CardSetInlinePtr(CardSetPtr volatile* value_addr, CardSetPtr value) : _value_addr(value_addr), _value(value) {
- assert(G1CardSet::card_set_type(_value) == G1CardSet::CardSetInlinePtr, "Value " PTR_FORMAT " is not a valid G1CardSetInPtr.", p2i(_value));
+ G1CardSetInlinePtr(ContainerPtr volatile* value_addr, ContainerPtr value) : _value_addr(value_addr), _value(value) {
+ assert(G1CardSet::container_type(_value) == G1CardSet::ContainerInlinePtr, "Value " PTR_FORMAT " is not a valid G1CardSetInlinePtr.", p2i(_value));
}
G1AddCardResult add(uint const card_idx, uint const bits_per_card, uint const max_cards_in_inline_ptr);
@@ -115,13 +110,13 @@ public:
template <class CardVisitor>
void iterate(CardVisitor& found, uint const bits_per_card);
- operator CardSetPtr () { return _value; }
+ operator ContainerPtr () { return _value; }
static uint max_cards_in_inline_ptr(uint bits_per_card) {
return (BitsInValue - HeaderSize) / bits_per_card;
}
- static uint num_cards_in(CardSetPtr value) {
+ static uint num_cards_in(ContainerPtr value) {
return ((uintptr_t)value & SizeFieldMask) >> SizeFieldPos;
}
};
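G1CardSetInlinePtr packs a few card indexes directly into the pointer value behind a small header (tag bits plus a size field), as `merge`, `card_at`, and `num_cards_in` show above. A self-contained sketch of that packing; field widths and names here are illustrative, not the exact HotSpot layout:

```cpp
#include <cassert>
#include <cstdint>

// Inline-pointer packing sketch: header = tag bits + 3-bit size field,
// then card values of bits_per_card each.
constexpr unsigned kHeaderBits = 5;             // 2 tag bits + 3 size bits
constexpr unsigned kSizePos = 2;
constexpr uintptr_t kSizeMask = 0x7u << kSizePos;

// Store card at slot idx and bump the size field (like merge()).
inline uintptr_t inline_add(uintptr_t value, unsigned card,
                            unsigned idx, unsigned bits_per_card) {
  unsigned pos = kHeaderBits + idx * bits_per_card;
  uintptr_t with_size = (value & ~kSizeMask) | ((uintptr_t)(idx + 1) << kSizePos);
  return with_size | ((uintptr_t)card << pos);
}

// Extract the card stored at slot idx (like card_at()).
inline unsigned inline_card_at(uintptr_t value, unsigned idx,
                               unsigned bits_per_card) {
  unsigned pos = kHeaderBits + idx * bits_per_card;
  return (unsigned)((value >> pos) & (((uintptr_t)1 << bits_per_card) - 1));
}

// Number of cards currently stored (like num_cards_in()).
inline unsigned inline_num_cards(uintptr_t value) {
  return (unsigned)((value & kSizeMask) >> kSizePos);
}
```

This is why `max_cards_in_inline_ptr` is `(BitsInValue - HeaderSize) / bits_per_card`: whatever bits are left after the header, divided evenly among cards.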
@@ -143,18 +138,12 @@ public:
// To maintain these constraints, live objects should have ((_ref_count & 0x1) == 1),
// which requires that we increment the reference counts by 2 starting at _ref_count = 3.
//
-// When such an object is on a free list, we reuse the same field for linking
-// together those free objects.
-//
// All but inline pointers are of this kind. For those, card entries are stored
-// directly in the CardSetPtr of the ConcurrentHashTable node.
+// directly in the ContainerPtr of the ConcurrentHashTable node.
class G1CardSetContainer {
-private:
- union {
- G1CardSetContainer* _next;
- uintptr_t _ref_count;
- };
-
+ uintptr_t _ref_count;
+protected:
+ ~G1CardSetContainer() = default;
public:
G1CardSetContainer() : _ref_count(3) { }
@@ -166,18 +155,6 @@ public:
// to check the value after attempting to decrement.
uintptr_t decrement_refcount();
- G1CardSetContainer* next() {
- return _next;
- }
-
- G1CardSetContainer** next_addr() {
- return &_next;
- }
-
- void set_next(G1CardSetContainer* next) {
- _next = next;
- }
-
// Log of largest card index that can be stored in any G1CardSetContainer
static uint LogCardsPerRegionLimit;
};
@@ -186,7 +163,7 @@ class G1CardSetArray : public G1CardSetContainer {
public:
typedef uint16_t EntryDataType;
typedef uint EntryCountType;
- using CardSetPtr = G1CardSet::CardSetPtr;
+ using ContainerPtr = G1CardSet::ContainerPtr;
private:
EntryCountType _size;
EntryCountType volatile _num_entries;
@@ -240,7 +217,7 @@ class G1CardSetBitMap : public G1CardSetContainer {
size_t _num_bits_set;
BitMap::bm_word_t _bits[1];
- using CardSetPtr = G1CardSet::CardSetPtr;
+ using ContainerPtr = G1CardSet::ContainerPtr;
template <class Derived>
static size_t header_size_in_bytes_internal() {
@@ -275,10 +252,10 @@ public:
class G1CardSetHowl : public G1CardSetContainer {
public:
typedef uint EntryCountType;
- using CardSetPtr = G1CardSet::CardSetPtr;
+ using ContainerPtr = G1CardSet::ContainerPtr;
EntryCountType volatile _num_entries;
private:
- CardSetPtr _buckets[2];
+ ContainerPtr _buckets[2];
// Do not add class member variables beyond this point
template <class Derived>
@@ -286,32 +263,32 @@ private:
return offset_of(Derived, _buckets);
}
- // Iterates over the given CardSetPtr with at index in this Howl card set,
+ // Iterates over the given ContainerPtr at the given index in this Howl card set,
// applying a CardOrRangeVisitor on it.
template <class CardOrRangeVisitor>
- void iterate_cardset(CardSetPtr const card_set, uint index, CardOrRangeVisitor& found, G1CardSetConfiguration* config);
+ void iterate_cardset(ContainerPtr const container, uint index, CardOrRangeVisitor& found, G1CardSetConfiguration* config);
public:
G1CardSetHowl(EntryCountType card_in_region, G1CardSetConfiguration* config);
- CardSetPtr* get_card_set_addr(EntryCountType index) {
+ ContainerPtr* get_container_addr(EntryCountType index) {
return &_buckets[index];
}
bool contains(uint card_idx, G1CardSetConfiguration* config);
- // Iterates over all CardSetPtrs in this Howl card set, applying a CardOrRangeVisitor
+ // Iterates over all ContainerPtrs in this Howl card set, applying a CardOrRangeVisitor
// on it.
template <class CardOrRangeVisitor>
void iterate(CardOrRangeVisitor& found, G1CardSetConfiguration* config);
- // Iterates over all CardSetPtrs in this Howl card set. Calls
+ // Iterates over all ContainerPtrs in this Howl card set. Calls
//
- // void operator ()(CardSetPtr* card_set_addr);
+ // void operator ()(ContainerPtr* card_set_addr);
//
// on all of them.
- template <class CardSetPtrVisitor>
- void iterate(CardSetPtrVisitor& found, uint num_card_sets);
+ template <class ContainerPtrVisitor>
+ void iterate(ContainerPtrVisitor& found, uint num_card_sets);
static EntryCountType num_buckets(size_t size_in_bits, size_t num_cards_in_array, size_t max_buckets);
@@ -323,7 +300,7 @@ public:
static size_t header_size_in_bytes() { return header_size_in_bytes_internal<G1CardSetHowl>(); }
static size_t size_in_bytes(size_t num_arrays) {
- return header_size_in_bytes() + sizeof(CardSetPtr) * num_arrays;
+ return header_size_in_bytes() + sizeof(ContainerPtr) * num_arrays;
}
};
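The G1CardSetContainer comment above describes an unusual reference-counting convention: live objects keep the low bit of `_ref_count` set, so counts start at 3 and change in steps of 2. A simplified single-threaded sketch of that invariant (the real class uses atomic operations and more careful lifecycle handling):

```cpp
#include <cassert>
#include <cstdint>

// Odd-refcount convention sketch: (ref_count & 0x1) == 1 marks a live object,
// and the actual reference count is the remaining high bits (ref_count >> 1).
struct RefCounted {
  uintptr_t ref_count = 3;   // one reference, low bit marks "live"

  bool is_live() const { return (ref_count & 0x1) == 1; }
  uintptr_t references() const { return ref_count >> 1; }  // 3 -> 1, 5 -> 2

  void retain() { ref_count += 2; }              // step of 2 preserves the live bit
  uintptr_t release() { ref_count -= 2; return ref_count; }  // reaching 1 means zero refs
};
```

In the original code this trick let the same word double as a free-list link when the object was not live; the refactor in this patch removes the union and leaves only the counter.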
diff --git a/src/hotspot/share/gc/g1/g1CardSetContainers.inline.hpp b/src/hotspot/share/gc/g1/g1CardSetContainers.inline.hpp
index df0e43f0d84fc44bd211f22ebcfadb3117bb00fe..3949687a97c2fefabd10abefe252d284564844d4 100644
--- a/src/hotspot/share/gc/g1/g1CardSetContainers.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1CardSetContainers.inline.hpp
@@ -27,9 +27,11 @@
#include "gc/g1/g1CardSetContainers.hpp"
#include "gc/g1/g1GCPhaseTimes.hpp"
+#include "utilities/bitMap.inline.hpp"
#include "utilities/globalDefinitions.hpp"
+#include "utilities/spinYield.hpp"
-inline G1CardSetInlinePtr::CardSetPtr G1CardSetInlinePtr::merge(CardSetPtr orig_value, uint card_in_region, uint idx, uint bits_per_card) {
+inline G1CardSetInlinePtr::ContainerPtr G1CardSetInlinePtr::merge(ContainerPtr orig_value, uint card_in_region, uint idx, uint bits_per_card) {
assert((idx & (SizeFieldMask >> SizeFieldPos)) == idx, "Index %u too large to fit into size field", idx);
assert(card_in_region < ((uint)1 << bits_per_card), "Card %u too large to fit into card value field", card_in_region);
@@ -42,7 +44,7 @@ inline G1CardSetInlinePtr::CardSetPtr G1CardSetInlinePtr::merge(CardSetPtr orig_
uintptr_t value = ((uintptr_t)(idx + 1) << SizeFieldPos) | ((uintptr_t)card_in_region << card_pos);
uintptr_t res = (((uintptr_t)orig_value & ~SizeFieldMask) | value);
- return (CardSetPtr)res;
+ return (ContainerPtr)res;
}
inline G1AddCardResult G1CardSetInlinePtr::add(uint card_idx, uint bits_per_card, uint max_cards_in_inline_ptr) {
@@ -62,8 +64,8 @@ inline G1AddCardResult G1CardSetInlinePtr::add(uint card_idx, uint bits_per_card
if (num_cards >= max_cards_in_inline_ptr) {
return Overflow;
}
- CardSetPtr new_value = merge(_value, card_idx, num_cards, bits_per_card);
- CardSetPtr old_value = Atomic::cmpxchg(_value_addr, _value, new_value, memory_order_relaxed);
+ ContainerPtr new_value = merge(_value, card_idx, num_cards, bits_per_card);
+ ContainerPtr old_value = Atomic::cmpxchg(_value_addr, _value, new_value, memory_order_relaxed);
if (_value == old_value) {
return Added;
}
@@ -71,7 +73,7 @@ inline G1AddCardResult G1CardSetInlinePtr::add(uint card_idx, uint bits_per_card
_value = old_value;
// The value of the pointer may have changed to something different than
// an inline card set. Exit then instead of overwriting.
- if (G1CardSet::card_set_type(_value) != G1CardSet::CardSetInlinePtr) {
+ if (G1CardSet::container_type(_value) != G1CardSet::ContainerInlinePtr) {
return Overflow;
}
}
@@ -266,23 +268,23 @@ inline G1CardSetHowl::G1CardSetHowl(EntryCountType card_in_region, G1CardSetConf
inline bool G1CardSetHowl::contains(uint card_idx, G1CardSetConfiguration* config) {
EntryCountType bucket = config->howl_bucket_index(card_idx);
- CardSetPtr* array_entry = get_card_set_addr(bucket);
- CardSetPtr card_set = Atomic::load_acquire(array_entry);
+ ContainerPtr* array_entry = get_container_addr(bucket);
+ ContainerPtr container = Atomic::load_acquire(array_entry);
- switch (G1CardSet::card_set_type(card_set)) {
- case G1CardSet::CardSetArrayOfCards : {
- return G1CardSet::card_set_ptr<G1CardSetArray>(card_set)->contains(card_idx);
+ switch (G1CardSet::container_type(container)) {
+ case G1CardSet::ContainerArrayOfCards: {
+ return G1CardSet::container_ptr<G1CardSetArray>(container)->contains(card_idx);
}
- case G1CardSet::CardSetBitMap: {
+ case G1CardSet::ContainerBitMap: {
uint card_offset = config->howl_bitmap_offset(card_idx);
- return G1CardSet::card_set_ptr<G1CardSetBitMap>(card_set)->contains(card_offset, config->max_cards_in_howl_bitmap());
+ return G1CardSet::container_ptr<G1CardSetBitMap>(container)->contains(card_offset, config->max_cards_in_howl_bitmap());
}
- case G1CardSet::CardSetInlinePtr: {
- G1CardSetInlinePtr ptr(card_set);
+ case G1CardSet::ContainerInlinePtr: {
+ G1CardSetInlinePtr ptr(container);
return ptr.contains(card_idx, config->inline_ptr_bits_per_card());
}
- case G1CardSet::CardSetHowl: {// Fullcard set entry
- assert(card_set == G1CardSet::FullCardSet, "Must be");
+ case G1CardSet::ContainerHowl: { // Full card set entry
+ assert(container == G1CardSet::FullCardSet, "Must be");
return true;
}
}
@@ -296,38 +298,38 @@ inline void G1CardSetHowl::iterate(CardOrRangeVisitor& found, G1CardSetConfigura
}
}
-template <class CardSetPtrVisitor>
-inline void G1CardSetHowl::iterate(CardSetPtrVisitor& found, uint num_card_sets) {
+template <class ContainerPtrVisitor>
+inline void G1CardSetHowl::iterate(ContainerPtrVisitor& found, uint num_card_sets) {
for (uint i = 0; i < num_card_sets; ++i) {
found(&_buckets[i]);
}
}
template <class CardOrRangeVisitor>
-inline void G1CardSetHowl::iterate_cardset(CardSetPtr const card_set, uint index, CardOrRangeVisitor& found, G1CardSetConfiguration* config) {
- switch (G1CardSet::card_set_type(card_set)) {
- case G1CardSet::CardSetInlinePtr: {
+inline void G1CardSetHowl::iterate_cardset(ContainerPtr const container, uint index, CardOrRangeVisitor& found, G1CardSetConfiguration* config) {
+ switch (G1CardSet::container_type(container)) {
+ case G1CardSet::ContainerInlinePtr: {
if (found.start_iterate(G1GCPhaseTimes::MergeRSHowlInline)) {
- G1CardSetInlinePtr ptr(card_set);
+ G1CardSetInlinePtr ptr(container);
ptr.iterate(found, config->inline_ptr_bits_per_card());
}
return;
}
- case G1CardSet::CardSetArrayOfCards : {
+ case G1CardSet::ContainerArrayOfCards: {
if (found.start_iterate(G1GCPhaseTimes::MergeRSHowlArrayOfCards)) {
- G1CardSet::card_set_ptr<G1CardSetArray>(card_set)->iterate(found);
+ G1CardSet::container_ptr<G1CardSetArray>(container)->iterate(found);
}
return;
}
- case G1CardSet::CardSetBitMap: {
+ case G1CardSet::ContainerBitMap: {
if (found.start_iterate(G1GCPhaseTimes::MergeRSHowlBitmap)) {
uint offset = index << config->log2_max_cards_in_howl_bitmap();
- G1CardSet::card_set_ptr<G1CardSetBitMap>(card_set)->iterate(found, config->max_cards_in_howl_bitmap(), offset);
+ G1CardSet::container_ptr<G1CardSetBitMap>(container)->iterate(found, config->max_cards_in_howl_bitmap(), offset);
}
return;
}
- case G1CardSet::CardSetHowl: { // actually FullCardSet
- assert(card_set == G1CardSet::FullCardSet, "Must be");
+ case G1CardSet::ContainerHowl: { // actually FullCardSet
+ assert(container == G1CardSet::FullCardSet, "Must be");
if (found.start_iterate(G1GCPhaseTimes::MergeRSHowlFull)) {
uint offset = index << config->log2_max_cards_in_howl_bitmap();
found(offset, config->max_cards_in_howl_bitmap());
diff --git a/src/hotspot/share/gc/g1/g1CardSetMemory.cpp b/src/hotspot/share/gc/g1/g1CardSetMemory.cpp
index 85b4250090af3f1c1d467915274460e355d0b996..b68c50b5cb1b5ff4a6abd68e07e015ac765189ac 100644
--- a/src/hotspot/share/gc/g1/g1CardSetMemory.cpp
+++ b/src/hotspot/share/gc/g1/g1CardSetMemory.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2021, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -30,111 +30,61 @@
#include "runtime/atomic.hpp"
#include "utilities/ostream.hpp"
-template <class Slot>
-G1CardSetAllocator<Slot>::G1CardSetAllocator(const char* name,
- const G1CardSetAllocOptions* alloc_options,
- G1CardSetFreeList* free_segment_list) :
+G1CardSetAllocator::G1CardSetAllocator(const char* name,
+ const G1CardSetAllocOptions* alloc_options,
+ G1CardSetFreeList* free_segment_list) :
_segmented_array(alloc_options, free_segment_list),
- _transfer_lock(false),
- _free_slots_list(),
- _pending_slots_list(),
- _num_pending_slots(0),
- _num_free_slots(0)
+ _free_slots_list(name, &_segmented_array)
{
uint slot_size = _segmented_array.slot_size();
assert(slot_size >= sizeof(G1CardSetContainer), "Slot instance size %u for allocator %s too small", slot_size, name);
}
-template <class Slot>
-G1CardSetAllocator<Slot>::~G1CardSetAllocator() {
+G1CardSetAllocator::~G1CardSetAllocator() {
drop_all();
}
-template <class Slot>
-bool G1CardSetAllocator<Slot>::try_transfer_pending() {
- // Attempt to claim the lock.
- if (Atomic::load_acquire(&_transfer_lock) || // Skip CAS if likely to fail.
- Atomic::cmpxchg(&_transfer_lock, false, true)) {
- return false;
- }
- // Have the lock; perform the transfer.
-
- // Claim all the pending slots.
- G1CardSetContainer* first = _pending_slots_list.pop_all();
-
- if (first != nullptr) {
- // Prepare to add the claimed slots, and update _num_pending_slots.
- G1CardSetContainer* last = first;
- Atomic::load_acquire(&_num_pending_slots);
-
- uint count = 1;
- for (G1CardSetContainer* next = first->next(); next != nullptr; next = next->next()) {
- last = next;
- ++count;
- }
-
- Atomic::sub(&_num_pending_slots, count);
-
- // Wait for any in-progress pops to avoid ABA for them.
- GlobalCounter::write_synchronize();
- // Add synchronized slots to _free_slots_list.
- // Update count first so there can be no underflow in allocate().
- Atomic::add(&_num_free_slots, count);
- _free_slots_list.prepend(*first, *last);
- }
- Atomic::release_store(&_transfer_lock, false);
- return true;
-}
-
-template <class Slot>
-void G1CardSetAllocator<Slot>::free(Slot* slot) {
+void G1CardSetAllocator::free(void* slot) {
assert(slot != nullptr, "precondition");
- // Desired minimum transfer batch size. There is relatively little
- // importance to the specific number. It shouldn't be too big, else
- // we're wasting space when the release rate is low. If the release
- // rate is high, we might accumulate more than this before being
- // able to start a new transfer, but that's okay. Also note that
- // the allocation rate and the release rate are going to be fairly
- // similar, due to how the slots are used. - kbarret
- uint const trigger_transfer = 10;
-
- uint pending_count = Atomic::add(&_num_pending_slots, 1u, memory_order_relaxed);
-
- G1CardSetContainer* container = reinterpret_cast<G1CardSetContainer*>(reinterpret_cast<char*>(slot));
+ _free_slots_list.release(slot);
+}
- container->set_next(nullptr);
- assert(container->next() == nullptr, "precondition");
+void G1CardSetAllocator::drop_all() {
+ _free_slots_list.reset();
+ _segmented_array.drop_all();
+}
- _pending_slots_list.push(*container);
+size_t G1CardSetAllocator::mem_size() const {
+ return sizeof(*this) +
+ _segmented_array.num_segments() * sizeof(G1CardSetSegment) +
+ _segmented_array.num_available_slots() * _segmented_array.slot_size();
+}
- if (pending_count > trigger_transfer) {
- try_transfer_pending();
- }
+size_t G1CardSetAllocator::wasted_mem_size() const {
+ uint num_wasted_slots = _segmented_array.num_available_slots() -
+ _segmented_array.num_allocated_slots() -
+ (uint)_free_slots_list.pending_count();
+ return num_wasted_slots * _segmented_array.slot_size();
}
-template <class Slot>
-void G1CardSetAllocator<Slot>::drop_all() {
- _free_slots_list.pop_all();
- _pending_slots_list.pop_all();
- _num_pending_slots = 0;
- _num_free_slots = 0;
- _segmented_array.drop_all();
+uint G1CardSetAllocator::num_segments() const {
+ return _segmented_array.num_segments();
}
-template <class Slot>
-void G1CardSetAllocator<Slot>::print(outputStream* os) {
+void G1CardSetAllocator::print(outputStream* os) {
uint num_allocated_slots = _segmented_array.num_allocated_slots();
uint num_available_slots = _segmented_array.num_available_slots();
uint highest = _segmented_array.first_array_segment() != nullptr
? _segmented_array.first_array_segment()->num_slots()
: 0;
uint num_segments = _segmented_array.num_segments();
+ uint num_pending_slots = (uint)_free_slots_list.pending_count();
os->print("MA " PTR_FORMAT ": %u slots pending (allocated %u available %u) used %.3f highest %u segments %u size %zu ",
p2i(this),
- _num_pending_slots,
+ num_pending_slots,
num_allocated_slots,
num_available_slots,
- percent_of(num_allocated_slots - _num_pending_slots, num_available_slots),
+ percent_of(num_allocated_slots - num_pending_slots, num_available_slots),
highest,
num_segments,
mem_size());
@@ -143,13 +93,13 @@ void G1CardSetAllocator::print(outputStream* os) {
G1CardSetMemoryManager::G1CardSetMemoryManager(G1CardSetConfiguration* config,
G1CardSetFreePool* free_list_pool) : _config(config) {
- _allocators = NEW_C_HEAP_ARRAY(G1CardSetAllocator<G1CardSetContainer>,
+ _allocators = NEW_C_HEAP_ARRAY(G1CardSetAllocator,
_config->num_mem_object_types(),
mtGC);
for (uint i = 0; i < num_mem_object_types(); i++) {
- new (&_allocators[i]) G1CardSetAllocator<G1CardSetContainer>(_config->mem_object_type_name_str(i),
- _config->mem_object_alloc_options(i),
- free_list_pool->free_list(i));
+ new (&_allocators[i]) G1CardSetAllocator(_config->mem_object_type_name_str(i),
+ _config->mem_object_alloc_options(i),
+ free_list_pool->free_list(i));
}
}
@@ -167,7 +117,7 @@ G1CardSetMemoryManager::~G1CardSetMemoryManager() {
void G1CardSetMemoryManager::free(uint type, void* value) {
assert(type < num_mem_object_types(), "must be");
- _allocators[type].free((G1CardSetContainer*)value);
+ _allocators[type].free(value);
}
void G1CardSetMemoryManager::flush() {
@@ -188,9 +138,8 @@ size_t G1CardSetMemoryManager::mem_size() const {
for (uint i = 0; i < num_mem_object_types(); i++) {
result += _allocators[i].mem_size();
}
- return sizeof(*this) -
- (sizeof(G1CardSetAllocator<G1CardSetContainer>) * num_mem_object_types()) +
- result;
+ return sizeof(*this) + result -
+ (sizeof(G1CardSetAllocator) * num_mem_object_types());
}
size_t G1CardSetMemoryManager::wasted_mem_size() const {
diff --git a/src/hotspot/share/gc/g1/g1CardSetMemory.hpp b/src/hotspot/share/gc/g1/g1CardSetMemory.hpp
index a9d235f39e5c509bf6b041876efe37f68faa923b..c2663b41cf1ab8afb6cd9568426aeb3931930a10 100644
--- a/src/hotspot/share/gc/g1/g1CardSetMemory.hpp
+++ b/src/hotspot/share/gc/g1/g1CardSetMemory.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2021, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -29,9 +29,9 @@
#include "gc/g1/g1CardSetContainers.hpp"
#include "gc/g1/g1SegmentedArray.hpp"
#include "gc/g1/g1SegmentedArrayFreePool.hpp"
+#include "gc/shared/freeListAllocator.hpp"
#include "memory/allocation.hpp"
#include "utilities/growableArray.hpp"
-#include "utilities/lockFreeStack.hpp"
class G1CardSetConfiguration;
class outputStream;
@@ -62,52 +62,13 @@ typedef G1SegmentedArraySegment<mtGCCardSet> G1CardSetSegment;
typedef G1SegmentedArrayFreeList<mtGCCardSet> G1CardSetFreeList;
-// Arena-like allocator for (card set) heap memory objects (Slot slots).
+// Arena-like allocator for (card set) heap memory objects.
//
-// Allocation and deallocation in the first phase on G1CardSetContainer basis
-// may occur by multiple threads at once.
-//
-// Allocation occurs from an internal free list of G1CardSetContainers first,
-// only then trying to bump-allocate from the current G1CardSetSegment. If there is
-// none, this class allocates a new G1CardSetSegment (allocated from the C heap,
-// asking the G1CardSetAllocOptions instance about sizes etc) and uses that one.
-//
-// The SegmentStack free list is a linked list of G1CardSetContainers
-// within all G1CardSetSegment instances allocated so far. It uses a separate
-// pending list and global synchronization to avoid the ABA problem when the
-// user frees a memory object.
-//
-// The class also manages a few counters for statistics using atomic operations.
-// Their values are only consistent within each other with extra global
-// synchronization.
-//
-// Since it is expected that every CardSet (and in extension each region) has its
-// own set of allocators, there is intentionally no padding between them to save
-// memory.
-template <class Slot>
+// Allocation occurs from an internal free list of objects first. If the free list is
+// empty then tries to allocate from the G1SegmentedArray.
class G1CardSetAllocator {
- // G1CardSetSegment management.
-
- typedef G1SegmentedArray<Slot, mtGCCardSet> SegmentedArray;
- // G1CardSetContainer slot management within the G1CardSetSegments allocated
- // by this allocator.
- static G1CardSetContainer* volatile* next_ptr(G1CardSetContainer& slot);
- typedef LockFreeStack<G1CardSetContainer, &G1CardSetAllocator::next_ptr> SlotStack;
-
- SegmentedArray _segmented_array;
- volatile bool _transfer_lock;
- SlotStack _free_slots_list;
- SlotStack _pending_slots_list;
-
- volatile uint _num_pending_slots; // Number of slots in the pending list.
- volatile uint _num_free_slots; // Number of slots in the free list.
-
- // Try to transfer slots from _pending_slots_list to _free_slots_list, with a
- // synchronization delay for any in-progress pops from the _free_slots_list
- // to solve ABA here.
- bool try_transfer_pending();
-
- uint num_free_slots() const;
+ G1SegmentedArray _segmented_array;
+ FreeListAllocator _free_slots_list;
public:
G1CardSetAllocator(const char* name,
@@ -115,25 +76,18 @@ public:
G1CardSetFreeList* free_segment_list);
~G1CardSetAllocator();
- Slot* allocate();
- void free(Slot* slot);
+ void* allocate();
+ void free(void* slot);
// Deallocate all segments to the free segment list and reset this allocator. Must
// be called in a globally synchronized area.
void drop_all();
- size_t mem_size() const {
- return sizeof(*this) +
- _segmented_array.num_segments() * sizeof(G1CardSetSegment) + _segmented_array.num_available_slots() *
- _segmented_array.slot_size();
- }
+ size_t mem_size() const;
- size_t wasted_mem_size() const {
- return (_segmented_array.num_available_slots() - (_segmented_array.num_allocated_slots() - _num_pending_slots)) *
- _segmented_array.slot_size();
- }
+ size_t wasted_mem_size() const;
- inline uint num_segments() { return _segmented_array.num_segments(); }
+ uint num_segments() const;
void print(outputStream* os);
};
@@ -143,7 +97,7 @@ typedef G1SegmentedArrayFreePool G1CardSetFreePool;
class G1CardSetMemoryManager : public CHeapObj<mtGCCardSet> {
G1CardSetConfiguration* _config;
- G1CardSetAllocator<G1CardSetContainer>* _allocators;
+ G1CardSetAllocator* _allocators;
uint num_mem_object_types() const;
public:
diff --git a/src/hotspot/share/gc/g1/g1CardSetMemory.inline.hpp b/src/hotspot/share/gc/g1/g1CardSetMemory.inline.hpp
index 21a509f449d82d5c73f4d7844881d2180b8a35a1..bdf69e227dfc661f3af4351ce6c45a6a8218d031 100644
--- a/src/hotspot/share/gc/g1/g1CardSetMemory.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1CardSetMemory.inline.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2021, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -26,37 +26,13 @@
#define SHARE_GC_G1_G1CARDSETMEMORY_INLINE_HPP
#include "gc/g1/g1CardSetMemory.hpp"
-#include "gc/g1/g1CardSetContainers.hpp"
-#include "gc/g1/g1SegmentedArray.inline.hpp"
-#include "utilities/ostream.hpp"
-
#include "gc/g1/g1CardSetContainers.inline.hpp"
+#include "gc/g1/g1SegmentedArray.inline.hpp"
#include "utilities/globalCounter.inline.hpp"
+#include "utilities/ostream.hpp"
-template <class Slot>
-G1CardSetContainer* volatile* G1CardSetAllocator<Slot>::next_ptr(G1CardSetContainer& slot) {
- return slot.next_addr();
-}
-
-template <class Slot>
-Slot* G1CardSetAllocator<Slot>::allocate() {
- assert(_segmented_array.slot_size() > 0, "instance size not set.");
-
- if (num_free_slots() > 0) {
- // Pop under critical section to deal with ABA problem
- // Other solutions to the same problem are more complicated (ref counting, HP)
- GlobalCounter::CriticalSection cs(Thread::current());
-
- G1CardSetContainer* container = _free_slots_list.pop();
- if (container != nullptr) {
- Slot* slot = reinterpret_cast<Slot*>(reinterpret_cast<char*>(container));
- Atomic::sub(&_num_free_slots, 1u);
- guarantee(is_aligned(slot, 8), "result " PTR_FORMAT " not aligned", p2i(slot));
- return slot;
- }
- }
-
- Slot* slot = _segmented_array.allocate();
+inline void* G1CardSetAllocator::allocate() {
+ void* slot = _free_slots_list.allocate();
assert(slot != nullptr, "must be");
return slot;
}
@@ -74,9 +50,4 @@ inline void G1CardSetMemoryManager::free_node(void* value) {
free(0, value);
}
-template <class Slot>
-inline uint G1CardSetAllocator<Slot>::num_free_slots() const {
- return Atomic::load(&_num_free_slots);
-}
-
#endif // SHARE_GC_G1_G1CARDSETMEMORY_INLINE_HPP
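The removed allocate() above preferred a slot recycled from `_free_slots_list` before bump-allocating a fresh one from the segmented array (the pop happened under a GlobalCounter critical section to sidestep the ABA problem; that duty now moves into the shared FreeListAllocator). A minimal single-threaded sketch of that allocation order, with hypothetical names and all concurrency control deliberately omitted:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for the allocator's policy: reuse freed slots
// first, fall back to bump allocation from a backing array. The real
// code is lock-free and uses a segmented backing store.
class SlotAllocator {
  std::vector<char> _backing;   // stands in for the segmented array
  std::size_t _next = 0;        // bump pointer into _backing
  std::size_t _slot_size;
  std::vector<void*> _free;     // recycled slots (LIFO free list)

public:
  SlotAllocator(std::size_t slot_size, std::size_t capacity)
    : _backing(slot_size * capacity), _slot_size(slot_size) {}

  void* allocate() {
    if (!_free.empty()) {       // prefer recycled slots before growing
      void* slot = _free.back();
      _free.pop_back();
      return slot;
    }
    assert(_next + _slot_size <= _backing.size() && "backing array exhausted");
    void* slot = &_backing[_next];
    _next += _slot_size;
    return slot;
  }

  void free(void* slot) { _free.push_back(slot); }
};
```

In the concurrent original, popping the free list without protection would be unsafe: a slot could be popped, reused, and re-pushed between another thread's read of the list head and its compare-and-swap, which is exactly the ABA hazard the critical section guarded against.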
diff --git a/src/hotspot/share/gc/g1/g1CollectedHeap.cpp b/src/hotspot/share/gc/g1/g1CollectedHeap.cpp
index cbd13a7f282e8096c44bd2de818c17a183ef1cde..3b6458b9f82b81f4131c7b80fb3953b33d0cb81a 100644
--- a/src/hotspot/share/gc/g1/g1CollectedHeap.cpp
+++ b/src/hotspot/share/gc/g1/g1CollectedHeap.cpp
@@ -3313,6 +3313,13 @@ HeapRegion* G1CollectedHeap::alloc_highest_free_region() {
return NULL;
}
+void G1CollectedHeap::mark_evac_failure_object(const oop obj, uint worker_id) const {
+ // All objects failing evacuation are live. What we'll do is
+ // that we'll update the prev marking info so that they are
+ // all under PTAMS and explicitly marked.
+ _cm->par_mark_in_prev_bitmap(obj);
+}
+
// Optimized nmethod scanning
class RegisterNMethodOopClosure: public OopClosure {
diff --git a/src/hotspot/share/gc/g1/g1CollectedHeap.hpp b/src/hotspot/share/gc/g1/g1CollectedHeap.hpp
index 90a4dd05b217b3c814314b30c01b068cc8b51c28..de2442c7ce57588fa70feccb6129180b7bd3bd2e 100644
--- a/src/hotspot/share/gc/g1/g1CollectedHeap.hpp
+++ b/src/hotspot/share/gc/g1/g1CollectedHeap.hpp
@@ -28,6 +28,7 @@
#include "gc/g1/g1BarrierSet.hpp"
#include "gc/g1/g1BiasedArray.hpp"
#include "gc/g1/g1CardTable.hpp"
+#include "gc/g1/g1CardSet.hpp"
#include "gc/g1/g1CollectionSet.hpp"
#include "gc/g1/g1CollectorState.hpp"
#include "gc/g1/g1ConcurrentMark.hpp"
@@ -1178,7 +1179,7 @@ public:
size_t max_tlab_size() const override;
size_t unsafe_max_tlab_alloc(Thread* ignored) const override;
- inline bool is_in_young(const oop obj);
+ inline bool is_in_young(const oop obj) const;
// Returns "true" iff the given word_size is "very large".
static bool is_humongous(size_t word_size) {
@@ -1248,7 +1249,7 @@ public:
inline bool is_obj_dead_full(const oop obj) const;
// Mark the live object that failed evacuation in the prev bitmap.
- inline void mark_evac_failure_object(const oop obj, uint worker_id) const;
+ void mark_evac_failure_object(const oop obj, uint worker_id) const;
G1ConcurrentMark* concurrent_mark() const { return _cm; }
diff --git a/src/hotspot/share/gc/g1/g1CollectedHeap.inline.hpp b/src/hotspot/share/gc/g1/g1CollectedHeap.inline.hpp
index 9cdf6c35432a6fcd42bc96171f012eb38e5b8654..0cd8de23e56871578153d4d58547f43def5bdb40 100644
--- a/src/hotspot/share/gc/g1/g1CollectedHeap.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1CollectedHeap.inline.hpp
@@ -29,7 +29,6 @@
#include "gc/g1/g1BarrierSet.hpp"
#include "gc/g1/g1CollectorState.hpp"
-#include "gc/g1/g1ConcurrentMark.inline.hpp"
#include "gc/g1/g1EvacFailureRegions.hpp"
#include "gc/g1/g1Policy.hpp"
#include "gc/g1/g1RemSet.hpp"
@@ -208,7 +207,7 @@ void G1CollectedHeap::register_optional_region_with_region_attr(HeapRegion* r) {
_region_attr.set_optional(r->hrm_index(), r->rem_set()->is_tracked());
}
-inline bool G1CollectedHeap::is_in_young(const oop obj) {
+inline bool G1CollectedHeap::is_in_young(const oop obj) const {
if (obj == NULL) {
return false;
}
@@ -234,13 +233,6 @@ inline bool G1CollectedHeap::is_obj_dead_full(const oop obj) const {
return is_obj_dead_full(obj, heap_region_containing(obj));
}
-inline void G1CollectedHeap::mark_evac_failure_object(const oop obj, uint worker_id) const {
- // All objects failing evacuation are live. What we'll do is
- // that we'll update the prev marking info so that they are
- // all under PTAMS and explicitly marked.
- _cm->par_mark_in_prev_bitmap(obj);
-}
-
inline void G1CollectedHeap::set_humongous_reclaim_candidate(uint region, bool value) {
assert(_hrm.at(region)->is_starts_humongous(), "Must start a humongous object");
_humongous_reclaim_candidates.set_candidate(region, value);
diff --git a/src/hotspot/share/gc/g1/g1EvacFailureRegions.cpp b/src/hotspot/share/gc/g1/g1EvacFailureRegions.cpp
index 855c549cd04800ddac3535568d84bccad96bfeb3..a67fb06a333d68d8774686c82e3742ccf9e188e2 100644
--- a/src/hotspot/share/gc/g1/g1EvacFailureRegions.cpp
+++ b/src/hotspot/share/gc/g1/g1EvacFailureRegions.cpp
@@ -29,6 +29,7 @@
#include "gc/g1/heapRegion.hpp"
#include "memory/allocation.hpp"
#include "runtime/atomic.hpp"
+#include "utilities/bitMap.inline.hpp"
G1EvacFailureRegions::G1EvacFailureRegions() :
_regions_failed_evacuation(mtGC),
diff --git a/src/hotspot/share/gc/g1/g1FullGCMarker.cpp b/src/hotspot/share/gc/g1/g1FullGCMarker.cpp
index bf0b49dd8283a15ec672c863677ed86ecd6af4d2..398b904fdb2eca27caf4ac9880dc355aed13e2b9 100644
--- a/src/hotspot/share/gc/g1/g1FullGCMarker.cpp
+++ b/src/hotspot/share/gc/g1/g1FullGCMarker.cpp
@@ -55,7 +55,7 @@ void G1FullGCMarker::complete_marking(OopQueueSet* oop_stacks,
ObjArrayTaskQueueSet* array_stacks,
TaskTerminator* terminator) {
do {
- drain_stack();
+ follow_marking_stacks();
ObjArrayTask steal_array;
if (array_stacks->steal(_worker_id, steal_array)) {
follow_array_chunk(objArrayOop(steal_array.obj()), steal_array.index());
diff --git a/src/hotspot/share/gc/g1/g1FullGCMarker.hpp b/src/hotspot/share/gc/g1/g1FullGCMarker.hpp
index 2d935d863c5debe86f7fab6b57215ba20a2cc4a5..ee77e5044fc4bdb81b354db37e9f1598254435fe 100644
--- a/src/hotspot/share/gc/g1/g1FullGCMarker.hpp
+++ b/src/hotspot/share/gc/g1/g1FullGCMarker.hpp
@@ -79,12 +79,12 @@ class G1FullGCMarker : public CHeapObj<mtGC> {
inline void follow_array(objArrayOop array);
inline void follow_array_chunk(objArrayOop array, int index);
- inline void drain_oop_stack();
- // Transfer contents from the objArray task queue overflow stack to the shared
- // objArray stack.
+ inline void publish_and_drain_oop_tasks();
+ // Try to publish all contents from the objArray task queue overflow stack to
+ // the shared objArray stack.
// Returns true and a valid task if there has not been enough space in the shared
- // objArray stack, otherwise the task is invalid.
- inline bool transfer_objArray_overflow_stack(ObjArrayTask& task);
+ // objArray stack, otherwise returns false and the task is invalid.
+ inline bool publish_or_pop_objarray_tasks(ObjArrayTask& task);
public:
G1FullGCMarker(G1FullCollector* collector,
@@ -103,7 +103,7 @@ public:
inline void follow_klass(Klass* k);
inline void follow_cld(ClassLoaderData* cld);
- inline void drain_stack();
+ inline void follow_marking_stacks();
void complete_marking(OopQueueSet* oop_stacks,
ObjArrayTaskQueueSet* array_stacks,
TaskTerminator* terminator);
diff --git a/src/hotspot/share/gc/g1/g1FullGCMarker.inline.hpp b/src/hotspot/share/gc/g1/g1FullGCMarker.inline.hpp
index a68bae5ced64f934b6bb7f5c22d67bc1bf81eae4..256fd766f82cf852efe477bbfae2cc2b50165044 100644
--- a/src/hotspot/share/gc/g1/g1FullGCMarker.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1FullGCMarker.inline.hpp
@@ -151,7 +151,7 @@ inline void G1FullGCMarker::follow_object(oop obj) {
}
}
-inline void G1FullGCMarker::drain_oop_stack() {
+inline void G1FullGCMarker::publish_and_drain_oop_tasks() {
oop obj;
while (_oop_stack.pop_overflow(obj)) {
if (!_oop_stack.try_push_to_taskqueue(obj)) {
@@ -165,7 +165,7 @@ inline void G1FullGCMarker::drain_oop_stack() {
}
}
-inline bool G1FullGCMarker::transfer_objArray_overflow_stack(ObjArrayTask& task) {
+inline bool G1FullGCMarker::publish_or_pop_objarray_tasks(ObjArrayTask& task) {
// It is desirable to move as much as possible work from the overflow queue to
// the shared queue as quickly as possible.
while (_objarray_stack.pop_overflow(task)) {
@@ -176,15 +176,15 @@ inline bool G1FullGCMarker::transfer_objArray_overflow_stack(ObjArrayTask& task)
return false;
}
-void G1FullGCMarker::drain_stack() {
+void G1FullGCMarker::follow_marking_stacks() {
do {
// First, drain regular oop stack.
- drain_oop_stack();
+ publish_and_drain_oop_tasks();
// Then process ObjArrays one at a time to avoid marking stack bloat.
ObjArrayTask task;
- if (transfer_objArray_overflow_stack(task) ||
- _objarray_stack.pop_local(task)) {
+ if (publish_or_pop_objarray_tasks(task) ||
+ _objarray_stack.pop_local(task)) {
follow_array_chunk(objArrayOop(task.obj()), task.index());
}
} while (!is_empty());
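The renames above describe what the draining code actually does: tasks on a worker's private overflow stack are first published to the bounded shared queue, where idle workers can steal them, and are only processed locally when the shared queue has no room. A simplified single-threaded sketch of that publish-then-drain policy (the types, capacity, and single-worker setup are invented for illustration):

```cpp
#include <deque>
#include <vector>

struct Task { int id; };

// Hypothetical worker: _overflow models the unbounded private overflow
// stack, _shared models the bounded task queue other workers steal from.
class Worker {
  std::vector<Task> _overflow;
  std::deque<Task>& _shared;
  std::size_t _shared_capacity;

public:
  std::vector<int> processed;   // ids this worker handled itself

  Worker(std::deque<Task>& shared, std::size_t cap)
    : _shared(shared), _shared_capacity(cap) {}

  void push_overflow(Task t) { _overflow.push_back(t); }

  void publish_and_drain() {
    // First publish overflow work so other workers could steal it...
    while (!_overflow.empty()) {
      Task t = _overflow.back();
      _overflow.pop_back();
      if (_shared.size() < _shared_capacity) {
        _shared.push_back(t);           // published for stealing
      } else {
        processed.push_back(t.id);      // no room: process locally
      }
    }
    // ...then drain whatever remains in the shared queue.
    while (!_shared.empty()) {
      processed.push_back(_shared.front().id);
      _shared.pop_front();
    }
  }
};
```

Making overflow work visible before draining is the point of the change: it keeps the shared queue populated so termination-protocol stealing has something to find.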
diff --git a/src/hotspot/share/gc/g1/g1FullGCOopClosures.cpp b/src/hotspot/share/gc/g1/g1FullGCOopClosures.cpp
index ba3b50e5a963ee126e60b406fe232eb03dd6a8db..eebfa814ae20ae7e7fe70ddea205569db6a01968 100644
--- a/src/hotspot/share/gc/g1/g1FullGCOopClosures.cpp
+++ b/src/hotspot/share/gc/g1/g1FullGCOopClosures.cpp
@@ -36,7 +36,7 @@
G1IsAliveClosure::G1IsAliveClosure(G1FullCollector* collector) :
G1IsAliveClosure(collector, collector->mark_bitmap()) { }
-void G1FollowStackClosure::do_void() { _marker->drain_stack(); }
+void G1FollowStackClosure::do_void() { _marker->follow_marking_stacks(); }
void G1FullKeepAliveClosure::do_oop(oop* p) { do_oop_work(p); }
void G1FullKeepAliveClosure::do_oop(narrowOop* p) { do_oop_work(p); }
diff --git a/src/hotspot/share/gc/g1/g1Policy.cpp b/src/hotspot/share/gc/g1/g1Policy.cpp
index d41ffcbee7b2890e5d0d73ab17fa393f07c06953..6de90942ff5a76cc57fa6ae2de616bc6bd57059c 100644
--- a/src/hotspot/share/gc/g1/g1Policy.cpp
+++ b/src/hotspot/share/gc/g1/g1Policy.cpp
@@ -1313,7 +1313,6 @@ void G1Policy::calculate_old_collection_set_regions(G1CollectionSetCandidates* c
num_optional_regions = 0;
uint num_expensive_regions = 0;
- double predicted_old_time_ms = 0.0;
double predicted_initial_time_ms = 0.0;
double predicted_optional_time_ms = 0.0;
@@ -1344,7 +1343,7 @@ void G1Policy::calculate_old_collection_set_regions(G1CollectionSetCandidates* c
time_remaining_ms = MAX2(time_remaining_ms - predicted_time_ms, 0.0);
// Add regions to old set until we reach the minimum amount
if (num_initial_regions < min_old_cset_length) {
- predicted_old_time_ms += predicted_time_ms;
+ predicted_initial_time_ms += predicted_time_ms;
num_initial_regions++;
// Record the number of regions added with no time remaining
if (time_remaining_ms == 0.0) {
@@ -1358,7 +1357,7 @@ void G1Policy::calculate_old_collection_set_regions(G1CollectionSetCandidates* c
} else {
// Keep adding regions to old set until we reach the optional threshold
if (time_remaining_ms > optional_threshold_ms) {
- predicted_old_time_ms += predicted_time_ms;
+ predicted_initial_time_ms += predicted_time_ms;
num_initial_regions++;
} else if (time_remaining_ms > 0) {
// Keep adding optional regions until time is up.
@@ -1382,7 +1381,7 @@ void G1Policy::calculate_old_collection_set_regions(G1CollectionSetCandidates* c
}
log_debug(gc, ergo, cset)("Finish choosing collection set old regions. Initial: %u, optional: %u, "
- "predicted old time: %1.2fms, predicted optional time: %1.2fms, time remaining: %1.2f",
+ "predicted initial time: %1.2fms, predicted optional time: %1.2fms, time remaining: %1.2f",
num_initial_regions, num_optional_regions,
predicted_initial_time_ms, predicted_optional_time_ms, time_remaining_ms);
}
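The fix above removes a dead accumulator: region costs were summed into `predicted_old_time_ms`, which was never read, while the logged `predicted_initial_time_ms` stayed at 0.0. A standalone sketch of the selection loop with the corrected accounting (thresholds and values are invented for illustration, and the `num_expensive_regions` bookkeeping is omitted):

```cpp
#include <algorithm>
#include <vector>

struct Selection {
  unsigned initial = 0;
  unsigned optional = 0;
  double predicted_initial_ms = 0.0;
  double predicted_optional_ms = 0.0;
};

// Charge each candidate region's predicted copy time against the pause
// budget: take a mandatory minimum, keep taking regions while ample
// budget remains, then queue optional regions until the budget runs out.
Selection select_old_regions(const std::vector<double>& predicted_ms,
                             unsigned min_old_cset_length,
                             double optional_threshold_ms,
                             double time_remaining_ms) {
  Selection s;
  for (double t : predicted_ms) {
    time_remaining_ms = std::max(time_remaining_ms - t, 0.0);
    if (s.initial < min_old_cset_length) {
      // Mandatory minimum: always taken, charged to the initial bucket.
      s.predicted_initial_ms += t;
      s.initial++;
    } else if (time_remaining_ms > optional_threshold_ms) {
      s.predicted_initial_ms += t;   // the line the patch fixes
      s.initial++;
    } else if (time_remaining_ms > 0) {
      s.predicted_optional_ms += t;  // optional: evacuated only if time allows
      s.optional++;
    } else {
      break;                         // budget exhausted
    }
  }
  return s;
}
```

With the bug, `predicted_initial_ms` would report 0.0 in the ergonomics log no matter how many initial regions were chosen.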
diff --git a/src/hotspot/share/gc/g1/g1SegmentedArray.hpp b/src/hotspot/share/gc/g1/g1SegmentedArray.hpp
index 8091b6b2c8b4206eaeee9f5fdf3c489e8610eb9b..6c73e9855cc4ec7097e05a341e3a63f604664ebc 100644
--- a/src/hotspot/share/gc/g1/g1SegmentedArray.hpp
+++ b/src/hotspot/share/gc/g1/g1SegmentedArray.hpp
@@ -26,6 +26,7 @@
#ifndef SHARE_GC_G1_G1SEGMENTEDARRAY_HPP
#define SHARE_GC_G1_G1SEGMENTEDARRAY_HPP
+#include "gc/shared/freeListAllocator.hpp"
#include "memory/allocation.hpp"
#include "utilities/lockFreeStack.hpp"
@@ -180,8 +181,8 @@ public:
// The class also manages a few counters for statistics using atomic operations.
// Their values are only consistent within each other with extra global
// synchronization.
-template <class Slot, MEMFLAGS flag>
-class G1SegmentedArray {
+template <MEMFLAGS flag>
+class G1SegmentedArray : public FreeListConfig {
// G1SegmentedArrayAllocOptions provides parameters for allocation segment
// sizing and expansion.
const G1SegmentedArrayAllocOptions* _alloc_options;
@@ -222,7 +223,10 @@ public:
// be called in a globally synchronized area.
void drop_all();
- inline Slot* allocate();
+ inline void* allocate() override;
+
+ // We do not deallocate individual slots
+ inline void deallocate(void* node) override { ShouldNotReachHere(); }
inline uint num_segments() const;
diff --git a/src/hotspot/share/gc/g1/g1SegmentedArray.inline.hpp b/src/hotspot/share/gc/g1/g1SegmentedArray.inline.hpp
index 17be55fe6abeca85ff1cfbb51689f9cfffcd253a..69c3526e58d81686cd18d91495a7b4d1dc7bc95a 100644
--- a/src/hotspot/share/gc/g1/g1SegmentedArray.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1SegmentedArray.inline.hpp
@@ -115,8 +115,8 @@ void G1SegmentedArrayFreeList<flag>::free_all() {
Atomic::sub(&_mem_size, mem_size_freed, memory_order_relaxed);
}
-template <class Slot, MEMFLAGS flag>
-G1SegmentedArraySegment<flag>* G1SegmentedArray<Slot, flag>::create_new_segment(G1SegmentedArraySegment<flag>* const prev) {
+template <MEMFLAGS flag>
+G1SegmentedArraySegment<flag>* G1SegmentedArray<flag>::create_new_segment(G1SegmentedArraySegment<flag>* const prev) {
// Take an existing segment if available.
G1SegmentedArraySegment<flag>* next = _free_segment_list->get();
if (next == nullptr) {
@@ -125,7 +125,7 @@ G1SegmentedArraySegment<flag>* G1SegmentedArray<Slot, flag>::create_new_segment(
next = new G1SegmentedArraySegment<flag>(slot_size(), num_slots, prev);
} else {
assert(slot_size() == next->slot_size() ,
- "Mismatch %d != %d Slot %zu", slot_size(), next->slot_size(), sizeof(Slot));
+ "Mismatch %d != %d", slot_size(), next->slot_size());
next->reset(prev);
}
@@ -148,14 +148,14 @@ G1SegmentedArraySegment<flag>* G1SegmentedArray<Slot, flag>::create_new_segment(
}
}
-template <class Slot, MEMFLAGS flag>
-uint G1SegmentedArray<Slot, flag>::slot_size() const {
+template <MEMFLAGS flag>
+uint G1SegmentedArray<flag>::slot_size() const {
return _alloc_options->slot_size();
}
-template <class Slot, MEMFLAGS flag>
-G1SegmentedArray<Slot, flag>::G1SegmentedArray(const G1SegmentedArrayAllocOptions* alloc_options,
- G1SegmentedArrayFreeList<flag>* free_segment_list) :
+template <MEMFLAGS flag>
+G1SegmentedArray<flag>::G1SegmentedArray(const G1SegmentedArrayAllocOptions* alloc_options,
+ G1SegmentedArrayFreeList<flag>* free_segment_list) :
_alloc_options(alloc_options),
_first(nullptr),
_last(nullptr),
@@ -167,13 +167,13 @@ G1SegmentedArray<Slot, flag>::G1SegmentedArray(const G1SegmentedArrayAllocOption
assert(_free_segment_list != nullptr, "precondition!");
}
-template <class Slot, MEMFLAGS flag>
-G1SegmentedArray<Slot, flag>::~G1SegmentedArray() {
+template <MEMFLAGS flag>
+G1SegmentedArray<flag>::~G1SegmentedArray() {
drop_all();
}
-template <class Slot, MEMFLAGS flag>
-void G1SegmentedArray<Slot, flag>::drop_all() {
+template <MEMFLAGS flag>
+void G1SegmentedArray<flag>::drop_all() {
G1SegmentedArraySegment<flag>* cur = Atomic::load_acquire(&_first);
if (cur != nullptr) {
@@ -209,8 +209,8 @@ void G1SegmentedArray<Slot, flag>::drop_all() {
_num_allocated_slots = 0;
}
-template <class Slot, MEMFLAGS flag>
-Slot* G1SegmentedArray<Slot, flag>::allocate() {
+template <MEMFLAGS flag>
+void* G1SegmentedArray<flag>::allocate() {
assert(slot_size() > 0, "instance size not set.");
G1SegmentedArraySegment<flag>* cur = Atomic::load_acquire(&_first);
@@ -219,7 +219,7 @@ Slot* G1SegmentedArray<Slot, flag>::allocate() {
}
while (true) {
- Slot* slot = (Slot*)cur->get_new_slot();
+ void* slot = cur->get_new_slot();
if (slot != nullptr) {
Atomic::inc(&_num_allocated_slots, memory_order_relaxed);
guarantee(is_aligned(slot, _alloc_options->slot_alignment()),
@@ -232,8 +232,8 @@ Slot* G1SegmentedArray::allocate() {
}
}
-template <class Slot, MEMFLAGS flag>