Timely OpenJDK Migration For Efficient Java Run
Alexander Belokrylov is CEO of BellSoft. Accomplished expert in Java technology and IT leadership with over 10 years of experience.
Software applications are today's basic operations layer for any corporation, and Java remains the dominant language for enterprise development.
Java is a programming language that stands out from the rest. Its core principle of "write once, run anywhere," along with garbage collection, robust security and scalability, makes it the first choice for large, complex enterprise applications.
To keep Java workloads modern and efficient, there is a simple rule to follow: regular Java upgrades in line with the LTS release cadence. LTS stands for "long-term support"; these releases are issued every two to three years, and enterprises primarily build on them. However, regular migration to newer LTS versions is not standard business practice. Java migration often becomes a complex business issue, resulting in an extended stay on existing Java versions, for reasons I have explored previously.
This article looks at the migration issue from a different angle: the downside of delayed Java migration.
Each new Java LTS release brings enhanced technological capabilities and improvements to existing features. When you miss a Java upgrade, you fall behind on modern Java features. The main disadvantages and restrictions of staying on mature Java versions are security risks, rising operational costs and outdated functionality.
Security Risks
Security is always a priority for IT. With the growing number of CVEs identified each year and much of the code arriving from open source, timely upgrades are a must.
Java-based applications risk security issues arriving from the related libraries and frameworks used in development. To stay on the safe side, you need to promptly apply the security patches, upgrades and other enhancements your vendor releases.
Older LTS releases eventually have their support discontinued. Oracle no longer provides free public updates for Java 8, so you must turn to an alternative OpenJDK vendor, and only a few still offer that service.
Lack Of Modern Features
Keep in mind that while staying on the current Java version, you do not get the enhanced functionalities available in a new release. Using a version of Java that lacks modern features frustrates developers, and you might have difficulty finding skilled professionals who want to work with old releases.
Mature Java releases do not provide the coding flexibility that newer releases bring, especially where features relevant to cloud-based applications are concerned. Building a cloud-based Java application on a mature LTS release is trickier and more expensive to operate.
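As a quick illustration of the feature gap, here is a minimal sketch (assuming Java 21; the shapes and numbers are hypothetical) using records, sealed interfaces and pattern matching for switch, none of which exist on Java 8 or 11:

public class ShapeDemo {
    // Records (Java 16+) replace boilerplate-heavy data classes;
    // sealed interfaces (Java 17+) let the compiler check switch exhaustiveness.
    sealed interface Shape permits Circle, Rectangle {}
    record Circle(double radius) implements Shape {}
    record Rectangle(double width, double height) implements Shape {}

    // Pattern matching for switch (Java 21+) replaces instanceof/cast chains.
    static double area(Shape shape) {
        return switch (shape) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Rectangle r -> r.width() * r.height();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Circle(2.0))); // ~12.57
    }
}

The equivalent on Java 8 would require hand-written classes with constructors, getters, equals/hashCode and an instanceof cascade.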
Development is rarely a straightforward process: new features and algorithms are often added along the way, and they may require newer Java versions. Missing out on modern Java features limits your application's potential to grow in scale and complexity.
This problem runs deeper with the most mature LTS versions. The greater the gap between your current Java workloads and the latest LTS release, the more severe the restrictions your application will have to carry. For instance, support for some libraries and features is discontinued over time, so staying on legacy releases for too long means losing access to some critical Java components.
In addition, continuing to run software on mature LTS releases is unsatisfactory for customers, who are used to fast and convenient online journeys that mature Java versions struggle to deliver efficiently.
Last, but not least, software applications contain many interconnected components. This means that without a timely Java upgrade, you may not be able to upgrade the other elements of the workload. Overstaying on mature Java releases limits not only Java functionalities but also the usage of other useful frameworks and tools.
Rising Operational Costs
The cost of operating on older Java releases increases for several reasons. You may require special commercial support and extra features to keep your Java workloads up to date with the modern environment, and such support can arrive as an expensive line item in your budget.
Applications running on legacy Java releases are less efficient than those running on modern releases, and they cost more to run. For example, applications running on JDK 8 may consume more RAM, consequently raising your cloud budget. With the ever-increasing volume of data and information exchange, this issue is more important than you might think.
Timely Migration As A Long-Term Impact On The Overall State Of The Java Ecosystem
Java is evolving as fast as the rest of the IT industry, and it is a delusion to consider only disruptive technology as a top development priority. Skipping migration stages leaves your Java workloads behind, limiting modern IT opportunities for the enterprise and raising cost and security concerns.
Assessing the business function and value of Java applications is a critical aspect of application portfolio appraisal. While applications serve business goals, business requirements change over time, and the applications should follow these changes. Putting a stop to regular migration to new LTS releases creates a gap in functionality and customer expectations.
The best recommendation is to follow timely Java updates to ensure your organization is ahead in its competitive technological advancements and fully meets today's business needs. Consider arranging this process through a single administrative Java center that allows automated continuous updates, keeping all licenses up to date, ensuring security and easing administration of complex enterprise workloads often based on different OpenJDK runtimes.
Java News Roundup: WildFly 34, Stream Gatherers, Oracle CPU, Quarkiverse Release Process
This week's Java roundup for October 14th, 2024, features news highlighting: the release of WildFly 34; JEP 485, Stream Gatherers, proposed to target for JDK 24; Oracle Critical Patch Update for October 2024; and a potential leak in the SmallRye and Quarkiverse release processes.
OpenJDK
JEP 485, Stream Gatherers, has been promoted from Candidate to Proposed to Target for JDK 24. This JEP proposes to finalize the feature after two rounds of preview, namely: JEP 473, Stream Gatherers (Second Preview), delivered in JDK 23; and JEP 461, Stream Gatherers (Preview), delivered in JDK 22. This feature was designed to enhance the Stream API to support custom intermediate operations that will "allow stream pipelines to transform data in ways that are not easily achievable with the existing built-in intermediate operations." More details on this JEP may be found in the original design document and this InfoQ news story. The review is expected to conclude on October 23, 2024.
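For readers who have not tried the API yet, here is a minimal sketch using one of the built-in gatherers as an intermediate operation (assumes a JDK where Stream Gatherers are available, such as a JDK 24 early-access build or JDK 23 with preview features enabled; the sample data is arbitrary):

import java.util.List;
import java.util.stream.Gatherers;
import java.util.stream.Stream;

public class GatherersSketch {
    public static void main(String[] args) {
        // windowFixed(3) groups elements into fixed-size windows, an operation
        // that is awkward to express with the existing map/filter/flatMap operations.
        List<List<Integer>> windows = Stream.of(1, 2, 3, 4, 5, 6, 7)
                .gather(Gatherers.windowFixed(3))
                .toList();
        System.out.println(windows); // [[1, 2, 3], [4, 5, 6], [7]]
    }
}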
Oracle has released versions 23.0.1, 21.0.5, 17.0.13, 11.0.25, and 8u431 of the JDK as part of the quarterly Critical Patch Update Advisory for October 2024. More details on this release may be found in the release notes for version 23.0.1, version 21.0.5, version 17.0.13, version 11.0.25 and version 8u431.
Version 7.5.0 of the Regression Test Harness for the JDK, jtreg, has been released and is ready for integration into the JDK. The most significant changes include: the restoration of the jtdiff tool; and support for a LIBRARY.properties file located in the directory specified in the @library tag and read when jtreg compiles classes in that library. There was also a dependency upgrade to JUnit 5.11.0. Further details on this release may be found in the release notes.
JDK 24
Build 20 of the JDK 24 early-access builds was made available this past week, featuring updates from Build 19 that include fixes for various issues. Further details on this release may be found in the release notes.
For JDK 24, developers are encouraged to report bugs via the Java Bug Database.
Jakarta EE 11
In his weekly Hashtag Jakarta EE blog, Ivar Grimstad, Jakarta EE developer advocate at the Eclipse Foundation, provided an update on Jakarta EE 11, writing:
GlassFish now passes 84% of the tests in the refactored TCK for Jakarta EE 11. The remaining tests are mainly related to the Application Client Container. The Jakarta EE Platform Project is proposing to deprecate the Application Container in Jakarta EE 12. There are ongoing discussions about how much importance these tests should be given to Jakarta EE 11.
The Jakarta EE 11 Core Profile TCK has been staged, and both Open Liberty and WildFly are passing (or very close to passing) it. So it looks like we will be able to release Jakarta EE 11 Core Profile ahead of Jakarta EE 11 Platform and Jakarta EE 11 Web Profile.
The road to Jakarta EE 11 included four milestone releases with the potential for release candidates as necessary before the GA release in 4Q2024.
BellSoft
Concurrent with Oracle's Critical Patch Update (CPU) for October 2024, BellSoft has released CPU patches for versions 21.0.4.0.1, 17.0.12.0.1, 11.0.24.0.1, 8u431, 7u441 and 6u441 of Liberica JDK, their downstream distribution of OpenJDK, to address this list of CVEs. In addition, Patch Set Update (PSU) versions 23.0.1, 21.0.5, 17.0.13, 11.0.25 and 8u432, containing CPU and non-critical fixes, have also been released.
With an overall total of 1169 fixes and backports, BellSoft states that they have participated in eliminating 18 issues in all releases.
Spring Framework
The second release candidate of Spring Framework 6.2.0 delivers bug fixes, improvements in documentation, dependency upgrades and many new features such as: a rename of the OverrideMetadata class to BeanOverrideHandler to align with the existing naming convention of the other classes, interfaces and annotations defined in the org.springframework.test.context.bean.override package; and the addition of a messageConverters() method to the RestClient.Builder interface that allows setting the converters of the RestClient interface without initializing the default ones. This version will be included in the upcoming release of Spring Boot 3.4.0-RC1. More details on this release may be found in the release notes.
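As a rough sketch of what configuring converters through the builder looks like, the snippet below uses the long-standing consumer-based messageConverters() variant available since Spring Framework 6.1; the base URL and converter choice are hypothetical, and the new 6.2 addition described above is a further variant that avoids initializing the defaults at all:

import org.springframework.http.converter.json.MappingJackson2HttpMessageConverter;
import org.springframework.web.client.RestClient;

public class RestClientConfig {

    RestClient restClient() {
        return RestClient.builder()
                .baseUrl("https://api.example.com")   // hypothetical endpoint
                .messageConverters(converters -> {
                    converters.clear();                // keep only what we register
                    converters.add(new MappingJackson2HttpMessageConverter());
                })
                .build();
    }
}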
Similarly, the release of Spring Framework 6.1.14 also provides bug fixes, improvements in documentation, dependency upgrades and new features such as: the removal of support for relative paths in the ResourceHandlerUtils class, which eliminates security issues; and proper exception handling from the isCorsRequest() method, defined in the CorsUtils class, upon encountering a malformed Origin header. This version will be included in the upcoming Spring Boot 3.3.5 and 3.2.11 releases. More details on this release may be found in the release notes.
The Spring Framework team has also disclosed two Common Vulnerabilities and Exposures (CVEs):
These CVEs affect Spring Framework versions 5.3.0 - 5.3.40, 6.0.0 - 6.0.24 and 6.1.0 - 6.1.13.
The first release candidate of Spring Data 2024.1.0 delivers expanded support for Spring Data Value Expressions where property-placeholders may be leveraged in repository query methods annotated with @Query. There were also updates to sub-projects such as: Spring Data Commons 3.4.0-RC1, Spring Data MongoDB 4.4.0-RC1, Spring Data Elasticsearch 5.4.0-RC1 and Spring Data Neo4j 7.4.0-RC1. More details on this release may be found in the release notes.
Similarly, the releases of Spring Data 2024.0.5 and 2023.1.11 ship with bug fixes and respective dependency upgrades to sub-projects such as: Spring Data Commons 3.3.5 and 3.2.11; Spring Data MongoDB 4.3.5 and 4.2.11; Spring Data Elasticsearch 5.3.5 and 5.2.11; and Spring Data Neo4j 7.3.5 and 7.2.11. These versions will be included in the upcoming Spring Boot 3.3.5 and 3.2.11 releases.
WildFly
The release of WildFly 34 primarily focuses on WildFly Preview, a technical preview variant of the WildFly server. New features include: support for Jakarta Data 1.0, MicroProfile Rest Client 4.0 and MicroProfile Telemetry 2.0; a new Bill of Materials for WildFly Preview; and four new system properties (backlog, connection-high-water, connection-low-water and no-request-timeout) for configuration in the HTTP management interface. More details on this release may be found in the release notes. InfoQ will follow up with a more detailed news story.
Quarkus
The Quarkus team has disclosed that they recently discovered a potential leak in their Quarkiverse and SmallRye release processes and reported that there was no damage.
Clement Escoffier, Distinguished Engineer at Red Hat, summarized the issue, writing:
We've uncovered a security flaw in the release process for Quarkiverse and SmallRye that could have allowed malicious actors to impersonate projects and publish compromised artifacts.
We've implemented a new, more secure release pipeline to address this. If you're a maintainer, you've received a pull request to migrate to the new process. Quarkus itself is not affected by this issue, only SmallRye and Quarkiverse.
As a result, they have implemented a more secure release process and wanted to share the details with the Java community. InfoQ will follow up with a more detailed news story.
Micrometer
The first release candidate of Micrometer Metrics 1.14.0 provides bug fixes, improvements in documentation, dependency upgrades and new features such as: exposing an instance of the TestObservationRegistry class via the assertThat() method from the AssertJ Assertions class; expanding metrics to include virtual threads data; and improved performance when initializing the Tags class from an already sorted array of unique tags. More details on this release may be found in the release notes.
Similarly, versions 1.13.6 and 1.12.11 of Micrometer Metrics also feature bug fixes, improvements in documentation and a new feature that improves the memory usage of the StepBucketHistogram class by eliminating an internal field of the buckets that can be acquired from an instance of the FixedBoundaryHistogram class when needed. Further details on these releases may be found in the release notes for version 1.13.6 and version 1.12.11.
The first release candidate of Micrometer Tracing 1.4.0 ships with dependency upgrades and new features such as: support for list values in tags in the Span and SpanCustomizer interfaces; and making the OtelSpan class public instead of private to eliminate the use of reflection to act upon the underlying OpenTelemetry Span interface. More details on this release may be found in the release notes.
Similarly, versions 1.3.5 and 1.2.11 of Micrometer Tracing simply provide dependency upgrades. Further details on these releases may be found in the release notes for version 1.3.5 and version 1.2.11.
Project Reactor
The first release candidate of Project Reactor 2024.0.0 provides dependency upgrades to reactor-core 3.7.0-RC1, reactor-netty 1.2.0-RC1, reactor-pool 1.1.0-RC1, reactor-addons 3.6.0-RC1, reactor-kotlin-extensions 1.3.0-RC1 and reactor-kafka 1.4.0-RC1. Based on the Spring Calendar, it is anticipated that the GA version of Project Reactor 2024.0.0 will be released in November 2024. Further details on this release may be found in the changelog.
Next, Project Reactor 2023.0.11, the eleventh maintenance release, provides dependency upgrades to reactor-core 3.6.11 and reactor-netty 1.1.23. There was also a realignment to version 2023.0.11 with the reactor-pool 1.0.8, reactor-addons 3.5.2, reactor-kotlin-extensions 1.2.3 and reactor-kafka 1.3.23 artifacts that remain unchanged. More details on this release may be found in the changelog.
Piranha Cloud
The release of Piranha 24.10.0 delivers bug fixes and notable changes such as: ensuring that an instance of the Eclipse Jersey InjecteeSkippingAnalyzer class is installed when needed; and use of the Java PrintStream class or the isWriterAcquired() method, defined in the DefaultWebApplicationResponse class, in the DefaultServletRequestDispatcher class as a response to a top-level exception. Further details on this release may be found in their documentation and issue tracker.
Apache Software Foundation
The third milestone release of Apache TomEE 10.0.0 provides bug fixes, dependency upgrades and new features such as: an improved import of data sources and entity managers that obsoletes the use of the ImportSql class; and a new RequestNotActiveException class, which replaces throwing a NullPointerException when an instance of a Jakarta Servlet HttpServletRequest is invoked on a thread with no active servlet request. More details on this release may be found in the release notes.
JobRunr
The release of JobRunr 7.3.1 provides new features such as: an instance of the JobDetails class is now cacheable when injecting an interface instead of an implementation; and an enhanced JobRunr Dashboard that includes tips for diagnosing severe JobRunr exceptions for improved clarity of notifications. Further details on this release may be found in the release notes.
Keycloak
Keycloak 26.0.1 has been released with bug fixes and enhancements such as: a clarification of the behavior when multiple versions of the Keycloak Operator are installed in the same cluster; and improved error logging during a transaction commit. More details on this release may be found in the release notes.
JDKUpdater
Version 14.0.59+79 of JDKUpdater, a utility introduced in mid-March by Gerrit Grunwald, principal engineer at Azul, that lets developers keep track of updates related to builds of OpenJDK and GraalVM, has been released. This release resolves an issue with the calculation of the next update and the next release date of the JDK. More details on this release may be found in the release notes.
Gradle
The first release candidate of Gradle 8.11.0 delivers new features such as: improved performance of the configuration cache with opt-in parallel loading and storing of cache entries; compatibility of the C++ and Swift plugins with the configuration cache; and improved error and warning reporting, in which Java compilation errors are now displayed at the end of the build output. More details on this release may be found in the release notes.
Interview: Why Java Is The Future Of Cloud Applications
Because ARM64 processors consume less energy, more servers can be crammed into the same volume of datacentre space than with x86 hardware.
If workloads can run on ARM64 hardware, there is potentially more processing power available per datacentre rack. Each ARM-based rack, for example, consumes less power and requires less cooling infrastructure than an equivalent rack of x86 servers.
Scott Sellers is CEO of Azul, a company that offers an alternative to the Oracle Java Development Kit (JDK) called Azul Platform Core for developing and running enterprise Java applications. In an interview with Computer Weekly, Sellers discusses the impact of processor architectures on enterprise software development and why the original "write once, run anywhere" mantra of Java is more important than ever.
It is no longer the case that the only target platform for enterprise applications is an Intel or AMD-powered x86 server. Graphics processing units (GPUs) from Nvidia and the presence of alternative server chips from ARM mean the choice of target server platform is an important decision when deploying enterprise applications.
The rise of ARM"There's no question that the innovation on the ARM64 architecture is having a profound impact on the market," says Sellers. For instance, he points out, Amazon has made significant investments in developing ARM64-based server architectures for Amazon Web Services (AWS), while Microsoft and Google also have ARM server initiatives.
"It's an inherently more cost-effective platform compared to x86 servers," he adds. "At this point in time, performance is equal to, if not better than, x86, and the overall power efficiency is materially better."
According to Sellers, there is a lot of momentum behind ARM64 workloads. While public clouds generally support multiple programming languages, including Python, Java, C++ and Rust, using languages that must be compiled ahead of time for a target platform means revisiting source code when migrating between x86 and ARM-based servers. Languages such as Python and Java, which are interpreted or compiled "just in time" when the application runs, do not require applications to be recompiled.
"The beauty of Java is that the application doesn't have to be modified. No changes are necessary. It really does just work," he says.
According to Sellers, replatforming efforts usually involve a lot of work and a lot of testing, which makes it far more difficult for them to migrate cloud workloads from x86 servers onto ARM64. "If you base your applications on Java, you're not having to make these bets. You can make them dynamically based on what's available," he says.
This effectively means that in public cloud infrastructure as a service, a Java developer writes the code once and the Java runtime's just-in-time compiler generates machine code instructions for the target processor when the code runs. IT decision-makers can assess cost and performance dynamically, and choose the processor architecture based on cost or the performance level they need.
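To make that concrete, the following trivial sketch runs unmodified from the same class file on either architecture; the JVM's JIT compiler emits native instructions for whatever CPU it finds at run time (the printed properties are standard JDK system properties):

public class WhereAmIRunning {
    public static void main(String[] args) {
        // Identical bytecode on x86 and ARM64; only the JIT output differs.
        System.out.println("os.arch      = " + System.getProperty("os.arch"));   // e.g. amd64 or aarch64
        System.out.println("java.vm.name = " + System.getProperty("java.vm.name"));
        System.out.println("cores        = " + Runtime.getRuntime().availableProcessors());
    }
}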
Sellers claims Java runs exceptionally well both on x86 and ARM64 platforms. He says Azul customers are seeing a 30% to 40% performance benefit using the company's Java runtime engine. "That's true of both x86 and ARM64," he adds.
Sellers says IT leaders can take advantage of the performance and efficiency boost available on the ARM64 platform without the need to make any changes to the target workload. In the public cloud, he says this not only saves money – since the workload uses less cloud-based processing to achieve the same level of performance – but the workload also runs faster.
The decision on which platform to deploy a workload is something Sellers feels should be assessed as part of a return on investment calculation. "For the same amount of memory and processing capability, an ARM64 compute node is typically about 20% cheaper than the x86 equivalent," he says. This, he adds, is good for the tech sector. "Frankly, it keeps Intel and AMD honest."
He adds: "Some of our bigger customers now simply have hybrid deployments in the cloud, and by hybrid, what I mean is they're running x86 and ARM64 simultaneously to get the best of all worlds."
What Sellers is referring to is the fact that while customers may indeed want to run workloads on ARM64 infrastructure, there is far more x86 kit deployed in public cloud infrastructure.
While this is set to change over time, according to Sellers, many of Azul's biggest customers cannot purchase enough ARM64 compute nodes from public cloud providers, which means they have to hedge their bets a bit. Nevertheless, Sellers regards ARM64 as something that will inevitably become a dominant force in public cloud computing infrastructure.
Why it is not always about GPUs
Nvidia has seen huge demand for its GPUs to power artificial intelligence (AI) workloads in the datacentre. GPUs pack hundreds of relatively simple processor cores into a single device, which can then be programmed to run in parallel, achieving the acceleration required in AI inference and machine learning workloads.
Sellers describes AI as an "embarrassingly parallel" problem, which can be solved using a high number of GPU processing cores, each running a relatively simple set of instructions. This is why the GPU has become the engine of AI. But this does not make it suitable for all applications that require a high degree of parallelism, where several complex tasks are programmed to run simultaneously.
For one of Azul's customers, financial exchange LMAX Group, Sellers says GPUs would never work. "They would be way too slow and the LMAX use case is nowhere near as inherently parallel as AI."
GPUs, he says, are useful in accelerating a very specific type of application, where a relatively simple piece of processing can be distributed across many processor cores. But a GPU is not suitable for enterprise applications that require complex code to be run in parallel across multiple processors.
Beyond the hardware debate over whether to use GPUs in enterprise applications, Sellers believes the choice of programming language is an important consideration when coding AI software that targets GPUs.
While people are familiar with programming AI applications in Python, he says: "What people don't recognise is that Python code is not really doing anything. Python is just the front end to offload work to the GPUs."
Sellers says Java is better suited than other programming languages for developing and running traditional enterprise applications that require a high degree of parallelism.
While Nvidia offers CUDA for programming GPUs, Sellers says that for writing traditional enterprise applications, Java is the only programming language with true vector capabilities and massive multithreading capabilities. According to Sellers, these make Java a better language for programming applications that require parallel computing. With virtual threads, introduced in Java 21, it becomes easier to write, maintain and debug high-throughput concurrent applications.
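As a minimal sketch of the virtual-thread model Sellers refers to (assumes Java 21; the task body and count are hypothetical):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadsSketch {
    public static void main(String[] args) {
        // Each task gets its own virtual thread; a blocking call such as
        // Thread.sleep() parks the virtual thread instead of tying up an OS thread.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                    executor.submit(() -> {
                        Thread.sleep(100); // stand-in for blocking I/O
                        return i;
                    }));
        } // close() waits for all submitted tasks to complete
    }
}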
"Threading, as well as vectorisation, which enables more than one computer operation to be run simultaneously, have become a lot better over the last few Java releases," he adds.
Given Azul's product offerings, Sellers is clearly going to extol the virtues of the Java programming language. However, there is one common thread in the conversation that IT decision-makers should consider: assuming the future of enterprise IT is dominated by cloud-native architecture, even when some workloads must run on-premise, IT leaders need to address a new reality in which x86 is not the only game in town.