News Posts matching #Red Hat


AMD Updates ROCm to Support Ryzen AI Max and Radeon RX 9000 Series

AMD announced hardware acceleration support in its Radeon Open Compute (ROCm) platform for the Ryzen AI Max 300 "Strix Halo" client processors and the Radeon RX 9000 series gaming GPUs. For the Ryzen AI Max 300 "Strix Halo," this unlocks the compute power of the 40 RDNA 3.5 compute units, with their 80 AI accelerators and 2,560 stream processors, in addition to the AI-specific ISA of the up to 16 "Zen 5" CPU cores, including their full 512-bit FPU for executing AVX-512 instructions. For the Radeon RX 9000 series, this means putting those up to 64 RDNA 4 compute units, with up to 128 AI accelerators and up to 4,096 stream processors, to use.
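
Readers who want to confirm whether their own "Zen 5" system exposes the AVX-512 instruction set can check the CPU feature flags the Linux kernel reports. The short sketch below is a generic, Linux-only check of /proc/cpuinfo, not an AMD- or ROCm-specific tool; the flag names follow standard kernel conventions.

```python
# Minimal sketch (Linux only): check whether the running CPU advertises
# AVX-512 support by reading the "flags" line of /proc/cpuinfo.

def cpu_flags() -> set[str]:
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass  # not Linux, or /proc not mounted
    return set()

flags = cpu_flags()
for feature in ("avx512f", "avx512bw", "avx512vl", "avx512_vnni"):
    print(f"{feature}: {'yes' if feature in flags else 'no'}")
```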

AMD also announced that it has updated the ROCm product stack with support for several major Linux distributions, including openSUSE (available now), Ubuntu, and Red Hat EPEL, with the latter two getting ROCm support in the second half of 2025. Lastly, ROCm gets full Windows support, including PyTorch and ONNX-EP. A preview of the PyTorch support can be expected in Q3 2025, while a preview for ONNX-EP could arrive in July 2025.
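
For developers, the practical test of ROCm support is whether a ROCm-enabled PyTorch build can see the GPU. The sketch below assumes such a build is already installed; PyTorch's ROCm backend reuses the torch.cuda namespace, so the same calls work on both CUDA and HIP builds.

```python
# Minimal sketch: confirm that a ROCm-enabled PyTorch build can see an AMD GPU.
import torch

print("PyTorch:", torch.__version__)
print("HIP runtime:", torch.version.hip)          # None on CUDA-only builds
if torch.cuda.is_available():
    print("GPU 0:", torch.cuda.get_device_name(0))
    x = torch.randn(1024, 1024, device="cuda")    # lands on the AMD GPU under ROCm
    print("Matmul OK:", (x @ x).shape)
else:
    print("No ROCm/CUDA device visible to PyTorch")
```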

Red Hat & AMD Strengthen Strategic Collaboration - Leading to More Efficient GenAI

Red Hat, the world's leading provider of open source solutions, and AMD today announced a strategic collaboration to propel AI capabilities and optimize virtualized infrastructure. With this deepened alliance, Red Hat and AMD will expand customer choice across the hybrid cloud, from deploying optimized, efficient AI models to more cost-effectively modernizing traditional virtual machines (VMs). As workload demand and diversity continue to rise with the introduction of AI, organizations must have the capacity and resources to meet these escalating requirements. The average datacenter, however, is dedicated primarily to traditional IT systems, leaving little room to support intensive workloads such as AI. To answer this need, Red Hat and AMD are bringing together the power of Red Hat's industry-leading open source solutions with the comprehensive portfolio of AMD high-performance computing architectures.

AMD and Red Hat: Driving to more efficient generative AI
Red Hat and AMD are combining the power of Red Hat AI with the AMD portfolio of x86-based processors and GPU architectures to support optimized, cost-efficient and production-ready environments for AI-enabled workloads. AMD Instinct GPUs are now fully enabled on Red Hat OpenShift AI, empowering customers with the high-performance compute necessary for AI deployments across the hybrid cloud without extreme resource requirements. In addition, using AMD Instinct MI300X GPUs with Red Hat Enterprise Linux AI, Red Hat and AMD conducted testing on Microsoft Azure ND MI300X v5 to successfully demonstrate AI inferencing for scaling small language models (SLMs) as well as large language models (LLMs) deployed across multiple GPUs on a single VM, reducing the need to deploy across multiple VMs and reducing performance costs.
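
The multi-GPU, single-VM inferencing pattern described above can be approximated with the open-source Hugging Face stack rather than the Red Hat or AMD product tooling. In the hedged sketch below, "your-org/your-llm" is a placeholder model identifier, and device_map="auto" (via the accelerate library) shards the model weights across every GPU visible inside the VM.

```python
# Hedged sketch of single-VM, multi-GPU LLM inference with open-source tooling;
# this is not the Red Hat OpenShift AI or RHEL AI product workflow.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-llm"  # placeholder; substitute a real checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",   # spread layers across all GPUs in this one VM
)

prompt = "Summarize the benefits of multi-GPU inference on a single VM."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```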

Red Hat Introduces Red Hat Enterprise Linux 10

Red Hat, the world's leading provider of open source solutions, today introduced Red Hat Enterprise Linux 10, the evolution of the world's leading enterprise Linux platform to help meet the dynamic demands of hybrid cloud and the transformative power of AI. More than just an iteration, Red Hat Enterprise Linux 10 provides a strategic and intelligent backbone for enterprise IT to navigate increasing complexity, accelerate innovation and build a more secure computing foundation for the future.

As enterprise IT grapples with the proliferation of hybrid environments and the imperative to integrate AI workloads, the need for an intelligent, resilient and durable operating system has never been greater. Red Hat Enterprise Linux 10 rises to this challenge, delivering a platform engineered for agility, flexibility and manageability, all while retaining a strong security posture against the software threats of the future.

IBM Intros LinuxONE Emperor 5 Mainframe with Telum II Processor

IBM has introduced the LinuxONE Emperor 5, its newest Linux computing platform that runs on the Telum II processor with built-in AI acceleration features. This launch aims to tackle three key issues for tech leaders: better security measures, reduced costs, and smooth AI incorporation into business systems. The heart of the system, the Telum II processor, includes a second-generation on-chip AI accelerator. This component is designed to boost predictive AI abilities and large language models for instant transaction handling. The upcoming IBM Spyre Accelerator, set to arrive in late 2025 as a PCIe card, will boost generative AI functions. The platform comes with an updated AI Toolkit fine-tuned for the Telum II processor. It also offers early looks at Red Hat OpenShift AI and Red Hat OpenShift Virtualization, allowing unified control of both standard virtual machines and containerized workloads.

The platform provides wide-ranging security measures. These include confidential computing, strong cryptographic abilities, and NIST-approved post-quantum algorithms. These safeguard sensitive AI models and data from current risks and expected post-quantum attacks. When it comes to productivity, companies can combine several server workloads on one high-capacity system. This might cut ownership expenses by up to 44% compared to x86 options over five years. At the same time, it keeps exceptional 99.999999% uptime rates according to IBM. The LinuxONE Emperor 5 will run Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES) and Canonical Ubuntu Server. Tina Tarquinio, chief product officer at IBM Z and LinuxONE, said: "IBM LinuxONE 5 represents the next evolution of our Linux infrastructure strategy. It is designed to help clients unlock the full potential of Linux and AI while optimizing their datacenters, simplifying their operations, and addressing risk. Whether you're building intelligent applications, deploying regulated workloads, consolidating infrastructure, or preparing for the next wave of transformation, IBM LinuxONE offers an exciting path forward."

IBM & Oracle Expand Partnership - Aim to Advance Agentic AI and Hybrid Cloud

IBM is working with Oracle to bring the power of watsonx, IBM's flagship portfolio of AI products, to Oracle Cloud Infrastructure (OCI). Leveraging OCI's native AI services, the latest milestone in IBM's technology partnership with Oracle is designed to fuel a new era of multi-agentic, AI-driven productivity and efficiency across the enterprise. Organizations today are deploying AI throughout their operations, looking to take advantage of the extraordinary advancements in generative AI models, tools, and agents. AI agents that can provide a single, easy-to-use interface to complete tasks are emerging as key tools to help simplify the deployment and use of AI across enterprise operations and functions. "AI delivers the most impactful value when it works seamlessly across an entire business," said Greg Pavlik, executive vice president, AI and Data Management Services, Oracle Cloud Infrastructure. "IBM and Oracle have been collaborating to drive customer success for decades, and our expanded partnership will provide customers new ways to help transform their businesses with AI."

Watsonx Orchestrate to support multi-agent workflows
To give customers a consistent way to build and manage agents across multi-agent, multi-system business processes, spanning both Oracle and non-Oracle applications and data sources, IBM is making its watsonx Orchestrate AI agent offerings available on OCI in July. This multi-agent approach using watsonx Orchestrate is designed to work with the expansive AI agent offerings embedded within the Oracle AI Agent Studio for Fusion Applications, as well as OCI Generative AI Agents, and OCI's other AI services. It extends the ecosystem around Oracle Fusion Applications to enable further functionality across third-party and custom applications and data sources. The first use cases being addressed are in human resources. The watsonx Orchestrate agents will perform AI inferencing on OCI, which many customers use to host their data, AI, and other applications. IBM agents run in watsonx Orchestrate on Red Hat OpenShift on OCI, including in public, sovereign, government, and Oracle Alloy regions, to enable customers to address specific regulatory and privacy requirements. The agents can also be hosted on-premises or in multicloud environments for true hybrid cloud capabilities.
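
As a rough illustration of the multi-agent routing pattern described above, the framework-agnostic sketch below models each agent as a plain Python callable that owns one domain (the HR and Fusion agents here are hypothetical) and an orchestrator that routes tasks to the right agent; it does not use the watsonx Orchestrate or OCI APIs.

```python
# Framework-agnostic sketch of multi-agent task routing; agent names and
# domains are hypothetical placeholders, not product APIs.
from typing import Callable, Dict, List, Tuple

def hr_agent(task: str) -> str:
    return f"[HR agent] handled: {task}"

def fusion_agent(task: str) -> str:
    return f"[Fusion agent] handled: {task}"

AGENTS: Dict[str, Callable[[str], str]] = {
    "hr": hr_agent,
    "fusion": fusion_agent,
}

def orchestrate(tasks: List[Tuple[str, str]]) -> List[str]:
    """Route each (domain, task) pair to the agent that owns that domain."""
    return [AGENTS[domain](task) for domain, task in tasks]

print(orchestrate([("hr", "open a leave request"), ("fusion", "update cost center")]))
```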

IBM Cloud is First Service Provider to Deploy Intel Gaudi 3

IBM is the first cloud service provider to make Intel Gaudi 3 AI accelerators available to customers, a move designed to make powerful artificial intelligence capabilities more accessible and to directly address the high cost of specialized AI hardware. For Intel, the rollout on IBM Cloud marks the first major commercial deployment of Gaudi 3, bringing choice to the market. By leveraging Intel Gaudi 3 on IBM Cloud, the two companies aim to help clients cost-effectively test, innovate and deploy GenAI solutions.

According to a recent forecast by research firm Gartner, worldwide generative AI (GenAI) spending is expected to total $644 billion in 2025, an increase of 76.4% from 2024. The research found "GenAI will have a transformative impact across all aspects of IT spending markets, suggesting a future where AI technologies become increasingly integral to business operations and consumer products."
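
A quick sanity check of those figures: if 2025 spending of $644 billion represents a 76.4% increase, the implied 2024 baseline follows directly.

```python
# Quick arithmetic check of the Gartner figures quoted above.
spend_2025_bn = 644.0      # forecast 2025 GenAI spending, billions of dollars
growth_vs_2024 = 0.764     # stated year-over-year increase

implied_2024_bn = spend_2025_bn / (1.0 + growth_vs_2024)
print(f"Implied 2024 GenAI spending: ~${implied_2024_bn:.0f} billion")  # roughly $365 billion
```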

IBM & Intel Announce the Availability of Gaudi 3 AI Accelerators on IBM Cloud

Yesterday, at Intel Vision 2025, IBM announced the availability of Intel Gaudi 3 AI accelerators on IBM Cloud. This offering delivers Intel Gaudi 3 in a public cloud environment for production workloads. Through this collaboration, IBM Cloud aims to help clients more cost-effectively scale and deploy enterprise AI. Intel Gaudi 3 AI accelerators on IBM Cloud are currently available in Frankfurt (eu-de) and Washington, D.C. (us-east) IBM Cloud regions, with future availability for the Dallas (us-south) IBM Cloud region in Q2 2025.

IBM's AI in Action 2024 report found that 67% of surveyed leaders reported revenue increases of 25% or more from incorporating AI into business operations. Although AI is demonstrating promising revenue increases, enterprises are also balancing the costs associated with the infrastructure needed to drive performance. By leveraging Intel's Gaudi 3 on IBM Cloud, the two companies are aiming to help clients more cost-effectively test, innovate and deploy generative AI solutions. "By bringing Intel Gaudi 3 AI accelerators to IBM Cloud, we're enabling businesses to help scale generative AI workloads with optimized performance for inferencing and fine-tuning. This collaboration underscores our shared commitment to making AI more accessible and cost-effective for enterprises worldwide," said Saurabh Kulkarni, Vice President, Datacenter AI Strategy and Product Management, Intel.

Qualcomm and IBM Scale Enterprise-grade Generative AI from Edge to Cloud

Ahead of Mobile World Congress 2025, Qualcomm Technologies, Inc. and IBM (NYSE: IBM) announced an expanded collaboration to drive enterprise-grade generative artificial intelligence (AI) solutions across edge and cloud devices designed to enable increased immediacy, privacy, reliability, personalization, and reduced cost and energy consumption. Through this collaboration, the companies plan to integrate watsonx.governance for generative AI solutions powered by Qualcomm Technologies' platforms, and enable support for IBM's Granite models through the Qualcomm AI Inference Suite and Qualcomm AI Hub.

"At Qualcomm Technologies, we are excited to join forces with IBM to deliver cutting-edge, enterprise-grade generative AI solutions for devices across the edge and cloud," said Durga Malladi, senior vice president and general manager, technology planning and edge solutions, Qualcomm Technologies, Inc. "This collaboration enables businesses to deploy AI solutions that are not only fast and personalized but also come with robust governance, monitoring, and decision-making capabilities, with the ability to enhance the overall reliability of AI from edge to cloud."

Intel and AMD Form x86 Ecosystem Advisory Group

Intel Corp. (INTC) and AMD (NASDAQ: AMD) today announced the creation of an x86 ecosystem advisory group bringing together technology leaders to shape the future of the world's most widely used computing architecture. x86 is uniquely positioned to meet customers' emerging needs by delivering superior performance and seamless interoperability across hardware and software platforms. The group will focus on identifying new ways to expand the x86 ecosystem by enabling compatibility across platforms, simplifying software development, and providing developers with a platform to identify architectural needs and features to create innovative and scalable solutions for the future.

For over four decades, x86 has served as the bedrock of modern computing, establishing itself as the preferred architecture in data centers and PCs worldwide. In today's evolving landscape - characterized by dynamic AI workloads, custom chiplets, and advancements in 3D packaging and system architectures - the importance of a robust and expanding x86 ecosystem is more crucial than ever.

Micron First to Achieve Qualification Sample Milestone to Accelerate Ecosystem Adoption of CXL 2.0 Memory

Micron Technology, a leader in innovative data center solutions, today announced it has achieved its qualification sample milestone for the Micron CZ120 memory expansion modules using Compute Express Link (CXL). Micron is the first in the industry to achieve this milestone, which accelerates the adoption of CXL solutions within the data center to tackle the growing memory challenges stemming from existing data-intensive workloads and emerging artificial intelligence (AI) and machine learning (ML) workloads.

Using a new and emerging CXL standard, the CZ120 required substantial hardware testing for reliability, quality and performance across CPU providers and OEMs, along with comprehensive software testing for compatibility and compliance with OS and hypervisor vendors. This achievement reflects the collaboration and commitment across the data center ecosystem to validate the advantages of CXL memory. By testing the combined products for interoperability and compatibility across hardware and software, the Micron CZ120 memory expansion modules satisfy the rigorous standards for reliability, quality and performance required by customers' data centers.
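
On the host side, a Linux kernel with CXL support enumerates attached CXL devices (memory expanders, ports, decoders) under sysfs. The hedged sketch below simply lists whatever the kernel exposes; the exact attribute layout varies by kernel version, so the capacity read is best-effort rather than guaranteed.

```python
# Hedged sketch: enumerate CXL devices exposed by the Linux kernel under sysfs.
import os

CXL_SYSFS = "/sys/bus/cxl/devices"

if not os.path.isdir(CXL_SYSFS):
    print("No CXL bus exposed by this kernel")
else:
    for name in sorted(os.listdir(CXL_SYSFS)):
        size_file = os.path.join(CXL_SYSFS, name, "ram", "size")  # present on many memdevs
        size = None
        if os.path.isfile(size_file):
            with open(size_file) as f:
                size = f.read().strip()
        print(name, f"(volatile capacity: {size})" if size else "")
```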

Researcher's Curiosity Uncovers Backdoor in Popular Linux Utility, Compromising SSH Connections

In a discovery that sent shockwaves through the Linux community, Andres Freund, Principal Software Engineer at Microsoft, found a malicious backdoor in the widely used compression tool "xz Utils." The backdoor, introduced in versions 5.6.0 and 5.6.1 of the utility, can break the robust encryption provided by the Secure Shell (SSH) protocol, allowing unauthorized access to affected systems. Freund noticed that SSH logins on a system running the latest xz Utils took about 0.5 seconds, compared to roughly 0.1 seconds on a system running an older version, prompting him to investigate and subsequently issue a widespread warning. While there are no confirmed reports of the backdoored versions being incorporated into production releases of major Linux distributions, the incident has raised serious concerns among users and developers alike.

Red Hat and Debian, two of the most well-known Linux distribution developers, have reported that their recently published beta releases, including Fedora 40, Fedora Rawhide, and the Debian testing, unstable, and experimental distributions, used at least one of the affected versions of xz Utils. According to Red Hat officials, the first signs of the backdoor were introduced in a February 23 update, which added obfuscated (unreadable) code to xz Utils. A subsequent update the following day introduced functions for deobfuscating the code and injecting it into code libraries during the utility's update process. The malicious code was cleverly hidden only in the release tarballs that target upstream Linux distributions.
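
Administrators who wanted to know whether they were exposed typically started by asking the installed binary for its version. The sketch below is a minimal illustration of that check, flagging only the two backdoored releases named above; it assumes the xz binary is on the PATH.

```python
# Minimal sketch: report the installed xz Utils version and flag the two
# backdoored releases (5.6.0 and 5.6.1) named in the disclosure.
import re
import subprocess

AFFECTED = {"5.6.0", "5.6.1"}

try:
    out = subprocess.run(["xz", "--version"], capture_output=True, text=True).stdout
except FileNotFoundError:
    out = ""

match = re.search(r"xz \(XZ Utils\) (\d+\.\d+\.\d+)", out)
if match:
    version = match.group(1)
    verdict = "AFFECTED - update or downgrade immediately" if version in AFFECTED else "not one of the known-bad releases"
    print(f"xz Utils {version}: {verdict}")
else:
    print("xz not found or version string not recognized")
```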

Alibaba Unveils Plans for Server-Grade RISC-V Processor and RISC-V Laptop

Chinese e-commerce and cloud giant Alibaba announced its plans to launch a server-grade RISC-V processor later this year, and it showcased a RISC-V-powered laptop running an open-source operating system. The announcements were made by Alibaba's research division, the Damo Academy, at the recent Xuantie RISC-V Ecological Conference in Shenzhen. The upcoming server-class processor, called the Xuantie C930, is expected to be launched by the end of 2024. While specific details about the chip have not been disclosed, it is anticipated to cater to AI and server workloads. This development is part of Alibaba's ongoing efforts to expand its RISC-V portfolio and reduce reliance on foreign chip technologies amidst US export restrictions. To complement the C930, Alibaba is also preparing a Xuantie 907 matrix processing unit for AI, which could be an IP block inside an SoC like the C930 or an SoC of its own.

In addition to the C930, Alibaba showcased the RuyiBOOK, a laptop powered by the company's existing T-Head C910 processor. The C910, previously designed for edge servers, AI, and telecommunications applications, has been adapted for use in laptops. Strangely, the RuyiBOOK laptop runs on the openEuler operating system, an open-source version of Huawei's EulerOS, which is based on Red Hat Linux. The laptop also features Alibaba's collaboration suite, DingTalk, and the open-source office software LibreOffice, demonstrating its potential to cater to the needs of Chinese knowledge workers and consumers without relying on foreign software. Zhang Jianfeng, president of the Damo Academy, emphasized the increasing demand for new computing power and the potential for RISC-V to enter a period of "application explosion." Alibaba plans to continue investing in RISC-V research and development and fostering collaboration within the industry to promote innovation and growth in the RISC-V ecosystem, lessening reliance on US-sourced technology.

ZOTAC Expands Computing Hardware with GPU Server Product Line for the AI-Bound Future

ZOTAC Technology Limited, a global leader in innovative technology solutions, expands its product portfolio with the GPU Server Series. The first series of products in ZOTAC's Enterprise lineup offers organizations affordable and high-performance computing solutions for a wide range of demanding applications, from core-to-edge inferencing and data visualization to model training, HPC modeling, and simulation.

The ZOTAC series of GPU Servers comes in a diverse range of form factors and configurations, featuring both Tower Workstations and Rack Mount Servers, as well as both Intel and AMD processor configurations. With support for up to 10 GPUs, modular design for easier access to internal hardware, a high space-to-performance ratio, and industry-standard features like redundant power supplies and extensive cooling options, ZOTAC's enterprise solutions can ensure optimal performance and durability, even under sustained intense workloads.

IBM Introduces LinuxONE 4 Express, a Value-oriented Hybrid Cloud & AI Platform

IBM has announced IBM LinuxONE 4 Express, extending the latest performance, security and AI capabilities of LinuxONE to small and medium-sized businesses and to new data center environments. The pre-configured rack mount system is designed to offer cost savings and to remove client guesswork when spinning up workloads quickly and getting started with the platform to address new and traditional use cases such as digital assets, medical imaging with AI, and workload consolidation.

Building an integrated hybrid cloud strategy for today and years to come
As businesses move their products and services online quickly, they are often left with a hybrid cloud environment created by default, with siloed stacks that are not conducive to alignment across businesses or the introduction of AI. In a recent IBM Institute for Business Value (IBV) survey, 84% of executives surveyed acknowledged their enterprise struggles in eliminating silo-to-silo handoffs. And 78% of responding executives said that an inadequate operating model impedes successful adoption of their multicloud platform. With the pressure to accelerate and scale the impact of data and AI across the enterprise - and improve business outcomes - another approach that organizations can take is to more carefully identify which workloads should be on-premises versus in the cloud.

IBM Storage Ceph Positioned as the Ideal Foundation for Modern Data Lakehouses

It's been one year since IBM integrated Red Hat storage product roadmaps and teams into IBM Storage. In that time, organizations have faced unprecedented data challenges in scaling AI, driven by the rapid growth of data across more locations and formats, often with poorer quality. Helping clients combat this problem has meant modernizing their infrastructure with cutting-edge solutions as part of their digital transformations. Largely, this involves delivering consistent application and data storage across on-premises and cloud environments. Also, crucially, this includes helping clients adopt cloud-native architectures to realize the benefits of public cloud like cost, speed, and elasticity. Formerly Red Hat Ceph and now IBM Storage Ceph, this state-of-the-art open-source software-defined storage platform is a keystone in this effort.

Software-defined storage (SDS) has emerged as a transformative force when it comes to data management, offering a host of advantages over traditional legacy storage arrays including extreme flexibility and scalability that are well-suited to handle modern use cases like generative AI. With IBM Storage Ceph, storage resources are abstracted from the underlying hardware, allowing for dynamic allocation and efficient utilization of data storage. This flexibility not only simplifies management but also enhances agility in adapting to evolving business needs and scaling compute and capacity as new workloads are introduced. This self-healing and self-managing platform is designed to deliver unified file, block, and object storage services at scale on industry standard hardware. Unified storage helps provide clients a bridge from legacy applications running on independent file or block storage to a common platform that includes those and object storage in a single appliance.
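
Because the Ceph Object Gateway (RGW) speaks the S3 protocol, the object side of that unified platform can be exercised with any standard S3 client. The hedged sketch below uses boto3 against a placeholder RGW endpoint with placeholder credentials; substitute values from your own cluster.

```python
# Hedged sketch: talk to a Ceph Object Gateway through its S3-compatible API.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://ceph-rgw.example.local:8080",  # placeholder RGW endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",                # placeholder credentials
    aws_secret_access_key="YOUR_SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"stored on Ceph RGW")
print([o["Key"] for o in s3.list_objects_v2(Bucket="demo-bucket").get("Contents", [])])
```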

Samsung Electronics and Red Hat Partnership to Lead Expansion of CXL Memory Ecosystem with Key Milestone

Samsung Electronics Co., Ltd., a world leader in advanced memory technology, today announced that for the first time in the industry, it has successfully verified Compute Express Link (CXL) memory operations in a real user environment with open-source software provider Red Hat, leading the expansion of its CXL ecosystem. Due to the exponential growth of data throughput and memory requirements for emerging fields like generative AI, autonomous driving and in-memory databases (IMDBs), the demand for systems with greater memory bandwidth and capacity is also increasing. CXL is a unified interface standard that connects processors such as CPUs and GPUs with memory devices over a PCIe interface, and it can serve as a solution to the speed, latency and expandability limitations of existing systems.

"Samsung has been working closely with a wide range of industry partners in areas from software, data centers and servers to chipset providers, and has been at the forefront of building up the CXL memory ecosystem," said Yongcheol Bae, Executive Vice President of Memory Product Planning at Samsung Electronics. "Our CXL partnership with Red Hat is an exemplary case of collaboration between advanced software and hardware, which will enrich and accelerate the CXL ecosystem as a whole."

OnLogic Helix 511 Fanless Industrial Computer Connects Modern and Legacy Systems

To help bridge the growing gap between modern systems and the legacy technology still in use around the world, global industrial computing specialists, OnLogic (www.onlogic.com), have released the Helix 511 Edge Computer. Designed for use in manufacturing, automation, energy management, and other edge and IoT applications, the Helix 511 easily interfaces with on-site systems thanks to a broad selection of modern and legacy connectivity options.

"We frequently have conversations with customers who are struggling to update their technology infrastructure simply because they can't connect existing systems to newer, more powerful and more capable devices," says OnLogic Product Manager, Hunter Golden. "The embedded computing space has traditionally lagged behind when it comes to adopting new technologies. We want to shift that paradigm while still allowing innovators to access and exchange data with their existing equipment. In many cases, you don't need to rip and replace everything, you just need a Helix 511."

OnLogic Launches Helix 401 Compact Fanless Computer

In response to the increasing demand for powerful computing that can be relied on in a wide range of even the most challenging installation environments, leading industrial computing manufacturer and solution provider, OnLogic (www.onlogic.com), has unveiled their Helix 401 fanless industrial computer. The compact device is designed for use in edge computing, Industry 4.0, Internet of Things (IoT), and many other emerging applications and will make its public debut at Embedded World 2023.

"You may never see them, but industrial computers are everywhere, working diligently to power technology solutions of every shape and size. These systems need to be small and reliable while still being just as powerful as high-end desktop machines," says Mike Walsh, Senior Product Manager at OnLogic. "The Helix 401 balances size and performance while providing a wide range of configuration possibilities to help users tailor it to their specific application. It's small enough to fit in your hand, similar in size to Intel's popular NUC, but capable enough to drive advanced automation solutions and power the next great smart agriculture, building automation or energy management innovation."

Intel Accelerates 5G Leadership with New Products

For more than a decade, Intel and its partners have been on a mission to virtualize the world's networks, from the core to the RAN (radio access network) and out to the edge, moving them from fixed-function hardware onto programmable, software-defined platforms, making networks more agile while driving down their complexity and cost.

Now operators are looking to cross the next chasm in delivering cloud-native functionality for automating, managing and responding to an increasingly diverse mix of data and services, providing organizations with the intelligence needed at the edge of their operations. Today, Intel announced a range of products and solutions driving this transition and broad industry support from leading operators, original equipment manufacturers (OEMs) and independent software vendors (ISVs).

Intel Accelerates Developer Innovation with Open, Software-First Approach

On Day 2 of Intel Innovation, Intel illustrated how its efforts and investments to foster an open ecosystem catalyze community innovation, from silicon to systems to apps and across all levels of the software stack. Through an expanding array of platforms, tools and solutions, Intel is focused on helping developers become more productive and more capable of realizing their potential for positive social good. The company introduced new tools to support developers in artificial intelligence, security and quantum computing, and announced the first customers of its new Project Amber attestation service.

"We are making good on our software-first strategy by empowering an open ecosystem that will enable us to collectively and continuously innovate," said Intel Chief Technology Officer Greg Lavender. "We are committed members of the developer community and our breadth and depth of hardware and software assets facilitate the scaling of opportunities for all through co-innovation and collaboration."

Arm Announces Next-Generation Neoverse Cores for High Performance Computing

The demand for data is insatiable, from 5G to the cloud to smart cities. As a society we want more autonomy, information to fuel our decisions and habits, and connection - to people, stories, and experiences.

To address these demands, the cloud infrastructure of tomorrow will need to handle the coming data explosion and the effective processing of evermore complex workloads … all while increasing power efficiency and minimizing carbon footprint. It's why the industry is increasingly looking to the performance, power efficiency, specialized processing and workload acceleration enabled by Arm Neoverse to redefine and transform the world's computing infrastructure.

Samsung & Red Hat Announce Collaboration in the Field of Next-Generation Memory Software

Samsung Electronics and Red Hat today announced a broad collaboration on software technologies for next-generation memory solutions. The partnership will focus on the development and validation of open source software for existing and emerging memory and storage products, including NVMe SSDs, CXL memory, computational memory/storage (HBM-PIM, Smart SSDs) and fabrics, with the goal of building an expansive ecosystem for closely integrated memory hardware and software. The exponential growth of data driven by AI, AR and the fast-approaching metaverse is bringing disruptive changes to memory designs, requiring more sophisticated software technologies that better link with the latest hardware advancements.

"Samsung and Red Hat will make a concerted effort to define and standardize memory software solutions that embrace evolving server and memory hardware, while building a more robust memory ecosystem," said Yongcheol Bae, Executive Vice President and Head of the Memory Application Engineering Team at Samsung Electronics. "We will invite partners from across the IT industry to join us in expanding the software-hardware memory ecosystem to create greater customer value."

IBM Unveils New Generation of IBM Power Servers for Frictionless, Scalable Hybrid Cloud

IBM (NYSE: IBM) today announced the new IBM Power E1080 server, the first in a new family of servers based on the new IBM Power10 processor, designed specifically for hybrid cloud environments. The IBM Power10-equipped E1080 server is engineered to be one of the most secured server platforms and is designed to help clients operate a secured, frictionless hybrid cloud experience across their entire IT infrastructure.

The IBM Power E1080 server is launching at a critical time for IT. As organizations around the world continue to adapt to unpredictable changes in consumer behaviors and needs, they need a platform that can deliver their applications and insights securely where and when they need them. The IBM Institute for Business Value's 2021 CEO Study found that, of the 3,000 CEOs surveyed, 56% emphasized the need to enhance operational agility and flexibility when asked what they'll most aggressively pursue over the next two to three years.

Linux Foundation to Form New Open 3D Foundation

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced an intent to form the Open 3D Foundation to accelerate developer collaboration on 3D game and simulation technology. The Open 3D Foundation will support open source projects that advance capabilities related to 3D graphics, rendering, authoring, and development. As the first project governed by the new foundation, Amazon Web Services, Inc. (AWS) is contributing an updated version of the Amazon Lumberyard game engine as the Open 3D Engine (O3DE), under the permissive Apache 2.0 license. The Open 3D Engine enables developers and content creators to build 3D experiences unencumbered by commercial terms and will provide the support and infrastructure of an open source community through forums, code repositories, and developer events. A developer preview of O3DE is available on GitHub today. For more information and/or to contribute, please visit: https://o3de.org

3D engines are used to create a range of virtual experiences, including games and simulations, by providing capabilities such as 3D rendering, content authoring tools, animation, physics systems, and asset processing. Many developers are seeking ways to build their intellectual property on top of an open source engine where the roadmap is highly visible, openly governed, and collaborative with the community as a whole. More developers are looking to create or augment their current technological foundations with highly collaborative solutions that can be used in any development environment. O3DE introduces a new ecosystem for developers and content creators to innovate, build, share, and distribute immersive 3D worlds that will inspire their users with rich experiences that bring the imaginations of their creators to life.

Lenovo Brings Linux Certification to ThinkPad and ThinkStation Workstation Portfolio

More than 250 million computers are sold each year and NetMarketShare reports that 2.87 percent - roughly 7.2 million users - are using those computers to run Linux. Once thought of as a niche IT crowd, this user base of data scientists, developers, application engineers, scientists and more is growing - stepping into sought-after roles across multiple industries and becoming essential within their companies. Now, I'm excited to share that Lenovo is moving to certify the full workstation portfolio for top Linux distributions from Ubuntu and Red Hat - every model, every configuration.

While many users prefer to customize their own machines - either on hardware without an OS or by wiping an existing client OS, then configuring and installing Linux - this can raise uncertainty with system stability, restricted performance, compatibility, end-user productivity and even IT support for devices. Now that these users are making their way out of the proverbial shadows and onto the enterprise floor, the demand is high for an out-of-the-box solution that removes the barrier for deployment of enterprise-grade hardware within a Linux software ecosystem.