News Posts matching #next generation


NVIDIA Reportedly Postpones SOCAMM Rollout; Could Debut with Next-gen "Rubin" AI GPUs

Around mid-February, South Korean sources alleged that NVIDIA was in the process of developing an innovative new memory form factor. The System on Chip Advanced Memory Module (SOCAMM) design is reportedly a collaborative effort—Team Green's usual memory partners, SK Hynix, Samsung, and Micron, were mentioned in early 2025 news articles. Just over a month later, official press material revealed a key forthcoming deployment—Micron stated: "(our) SOCAMM (product), a modular LPDDR5X memory solution, was developed in collaboration with NVIDIA to support the NVIDIA GB300 Grace Blackwell Ultra Superchip." In a (rumored) blow to all involved parties, ZDNet Korea posits that Team Green has postponed the commercialization of its "next-generation low-power DRAM module" IP. According to industry moles, the SOCAMM standard will not debut with the current generation of enterprise-focused "Grace-Blackwell" chips. Instead, fresher theories indicate a postponement into next-gen territory—possibly rescheduled to arrive alongside the firm's "Rubin" GPU architecture.

NVIDIA has reportedly sent out notices to major memory partners—(alleged) May 14 updates were received by Samsung Electronics and SK Hynix (in South Korea) and Micron (USA). As a result, SOCAMM supply timelines have (apparently) been adjusted. A newer "Cordelia" board design—acting as a substrate for GB300 chips, and compatible with SOCAMM—was reportedly in the picture. The latest whispers suggest a return to an existing "Bianca" board configuration, which supports current-gen LPDDR memory modules. ZDNet believes that company engineers have run into several obstacles: "Blackwell chips have been continuously experiencing difficulties in securing design and packaging yields. In fact, the 'Cordelia' board is known to have reliability issues, such as data loss, and SOCAMM has reliability issues, such as heat dissipation characteristics." NVIDIA briefly previewed its futuristic "Rubin Ultra" AI GPU design during GTC 2025—on-stage, a "second half of 2027" release window was teased.

NVIDIA & ServiceNow CEOs Jointly Present "Super Genius" Open-source Apriel Nemotron 15B LLM

ServiceNow is accelerating enterprise AI with a new reasoning model built in partnership with NVIDIA—enabling AI agents that respond in real time, handle complex workflows and scale functions like IT, HR and customer service teams worldwide. Unveiled today at ServiceNow's Knowledge 2025—where NVIDIA CEO and founder Jensen Huang joined ServiceNow chairman and CEO Bill McDermott during his keynote address—Apriel Nemotron 15B is compact, cost-efficient and tuned for action. It's designed to drive the next step forward in enterprise large language models (LLMs).

Apriel Nemotron 15B was developed with NVIDIA NeMo, the open NVIDIA Llama Nemotron Post-Training Dataset and ServiceNow domain-specific data, and was trained on NVIDIA DGX Cloud running on Amazon Web Services (AWS). The news follows the April release of the NVIDIA Llama Nemotron Ultra model, which harnesses the NVIDIA open dataset that ServiceNow used to build its Apriel Nemotron 15B model. Ultra is among the strongest open-source models at reasoning, including scientific reasoning, coding, advanced math and other agentic AI tasks.

MSI Presenting AI's Next Leap at Japan IT Week Spring 2025

MSI, a leading global provider of high-performance server solutions, is bringing AI-driven innovation to Japan IT Week Spring 2025 at Booth #21-2 with high-performance server platforms built for next-generation AI and cloud computing workloads. MSI's NVIDIA MGX AI Servers deliver modular GPU-accelerated computing to optimize AI training and inference, while the Core Compute line of Multi-Node Servers maximizes compute density and efficiency for AI inference and cloud service provider workloads. MSI's Open Compute line of ORv3 Servers enhances scalability and thermal efficiency in hyperscale AI deployments. MSI's Enterprise Servers provide balanced compute, storage, and networking for seamless AI workloads across cloud and edge. With deep expertise in system integration and AI-driven infrastructure, MSI is advancing the next generation of intelligent computing solutions to power AI's next leap.

"AI's advancement hinges on performance efficiency, compute density, and workload scalability. MSI's server platforms are engineered to accelerate model training, optimize inference, and maximize resource utilization—ensuring enterprises have the processing power to turn AI potential into real-world impact," said Danny Hsu, General Manager of MSI Enterprise Platform Solutions.

NVIDIA Blackwell Platform Boosts Water Efficiency by Over 300x - "Chill Factor" for AI Infrastructure

Traditionally, data centers have relied on air cooling—where mechanical chillers circulate chilled air to absorb heat from servers, helping them maintain optimal conditions. But as AI models increase in size, and the use of AI reasoning models rises, maintaining those optimal conditions is not only getting harder and more expensive—but more energy-intensive. While data centers once operated at 20 kW per rack, today's hyperscale facilities can support over 135 kW per rack—nearly seven times the power density, and a correspondingly harder heat-dissipation problem. To keep AI servers running at peak performance, a new approach is needed for efficiency and scalability.

One key solution is liquid cooling—by reducing dependence on chillers and enabling more efficient heat rejection, liquid cooling is driving the next generation of high-performance, energy-efficient AI infrastructure. The NVIDIA GB200 NVL72 and the NVIDIA GB300 NVL72 are rack-scale, liquid-cooled systems designed to handle the demanding tasks of trillion-parameter large language model inference. Their architecture is also specifically optimized for test-time scaling accuracy and performance, making them an ideal choice for running AI reasoning models while efficiently managing energy costs and heat.
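As a rough illustration of why a single rack-scale system matters for trillion-parameter inference, the sketch below runs the basic capacity arithmetic in Python; the FP4 weight size and the aggregate-HBM figure are assumptions stated in the comments, not vendor-verified measurements.

```python
# Back-of-the-envelope: why trillion-parameter inference wants a rack-scale
# memory domain. All figures are illustrative assumptions.

PARAMS = 1.0e12          # a 1-trillion-parameter model
BYTES_PER_PARAM = 0.5    # FP4 weights (4 bits), per Blackwell's FP4 support
HBM_PER_NVL72_TB = 13.5  # assumed aggregate HBM3e across the 72-GPU rack

weights_tb = PARAMS * BYTES_PER_PARAM / 1e12
print(f"FP4 weights alone: ~{weights_tb:.1f} TB")
print(f"Fits in one NVL72 domain: {weights_tb < HBM_PER_NVL72_TB}")
# KV cache and activations add substantially to this, which is why one large
# 72-GPU NVLink domain simplifies serving compared to many small GPU islands.
```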

Nintendo Confirms That Switch 2 Joy-Cons Will Not Utilize Hall Effect Stick Technology

Following last week's jam-packed Switch 2 presentation, Nintendo staffers engaged in conversation with media outlets. To the surprise of many, a high-level member of the incoming console's design team was quite comfortable name-dropping NVIDIA graphics technologies. Meanwhile, Team Green was tasked with disclosing the Switch 2's "internal" workings. Attention has turned to the much-anticipated hybrid console's bundled detachable Joy-Cons—in the lead-up to official unveilings, online debates swirled around potential next-gen controllers being upgraded with Hall Effect joystick modules. Many owners of first-gen Switch systems have expressed frustration regarding faulty Joy-Cons—eventually, Nintendo was coerced into offering free repairs for customers affected by dreaded "stick drift" issues. Unfortunately, it seems that the House of Mario has not opted to outfit its Gen 2.0 Joy-Cons with popular "anti-drift" tech.

As reported by Nintendo Life, Nate Bihldorff—senior vice president of product development and publishing at Nintendo of America—"outright confirmed the exclusion" of Hall Effect. Up until the publication of Nintendo Life's sit-down interview, other company representatives had opined that Switch 2's default control system features very "durable feeling" sticks. When asked about the reason behind "new-gen modules (feeling) so different to the original Switch's analog stick," Bihldorff responded with: "well, the Joy-Con 2's controllers have been designed from the ground up. They're not Hall Effect sticks, but they feel really good. Did you experience both the Joy-Con and the Pro Controller?" The interviewer confirmed that they had prior experience with both new models. In response, Bihldorff continued: "so, I like both, but that Pro Controller, for some reason the first time I grabbed it, I was like, 'this feels like a GameCube controller.' I was a GameCube guy. Something about it felt so familiar, but the stick on that especially. I tried to spend a lot of time making sure that it was quiet. I don't know if you tried really whacking the stick around, but it really is (quiet)...(The Switch 2 Pro Controller) is one of the quietest controllers I've ever played." Nintendo will likely not discuss the "ins and outs" of its proprietary stick design, but inevitable independent teardowns of commercial hardware could verify the provenance of underlying mechanisms. Nowadays, hardcore game controller snobs prefer third-party solutions that sport Tunneling Magnetoresistance (TMR) joysticks.

Tenstorrent Launches Blackhole Developer Products at Tenstorrent Dev Day

Tenstorrent launched the next-generation Blackhole chip family today at its DevDay event in San Francisco. Featuring all-new RISC-V cores, Blackhole is built to handle massive AI workloads efficiently and offers an infinitely scalable solution.

Blackhole products are now available for order on tenstorrent.com:
  • Blackhole p100, powered by one processor without Ethernet, active-cooled: available for $999
  • Blackhole p150, powered by one processor with Ethernet, and available in passive-, active-, and liquid-cooled variants: available for $1,299
  • TT-QuietBox, a liquid-cooled desktop workstation powered by 4 Blackhole processors: available for $11,999

Lexar Ships the World's First 1 TB MicroSD Express Card for Use With Nintendo Switch 2

Lexar, a leading global brand of flash memory solutions, is excited to ship the world's first 1 TB microSD Express card. Built on the new SD card standard that combines PCI Express 3.0 and NVMe 1.3 interfaces, the PLAY PRO microSDXC Express Card delivers substantially improved performance, perfect for handheld gaming devices.

With up to 900 MB/s read and 600 MB/s write, the PLAY PRO microSDXC Express Card offers the fastest speeds in the microSD Express card format and gives gamers an epic performance power-up that delivers faster game loads and accelerated downloads. With capacity up to 1 TB, it also offers space for many large AAA games. It is backwards-compatible with UHS-I and UHS-II host devices (at UHS-I speeds), but future-proofed for tomorrow's cutting-edge handheld gaming systems and other upcoming devices that will leverage this next-gen technology.
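To put those interface speeds in perspective, here is a small, illustrative Python calculation; the 20 GB game size is a hypothetical example, and the ~104 MB/s figure is the UHS-I bus limit that applies when the card falls back to backwards-compatible mode.

```python
# Rough transfer-time comparison using the speeds quoted above.
# The 20 GB game size is a hypothetical example; UHS-I tops out near 104 MB/s.

GAME_GB = 20
for name, mb_per_s in [("microSD Express (read)", 900),
                       ("UHS-I fallback", 104)]:
    seconds = GAME_GB * 1000 / mb_per_s
    print(f"{name:24s}: ~{seconds:5.1f} s to read {GAME_GB} GB")
# microSD Express: ~22 s; UHS-I: ~192 s for the same data.
```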

Wreckfest 2 Smashes into Early Access, Bugbear & THQ Nordic Reveal Launch System Requirements

Vienna, Austria / Helsinki, Finland, March 20th, 2025: Ladies and Gentlemen, the next generation of demolition derby madness is here. Wreckfest 2 is crashing into Early Access on Steam today, bringing next-level car destruction, slicker graphics, improved physics, and a whole lot of new features to fuel your inner petrolhead. And yes, it's once again developed by Bugbear Entertainment—the masters of metal-mangling mayhem.

Just like its predecessor, Wreckfest 2 will keep evolving throughout Early Access, with fresh content and features rolling in regularly. Bugbear and THQ Nordic are all about making this a game for the players, shaped by the players. We want your feedback - tell us what you love, what you want more of, and what crazy ideas we absolutely need to implement. We've got a truckload of ideas (and an actual truck), but we want to hear which ones you like the most!

MSI Outlines Claw 8 AI+ & Claw 7 AI+ Upgrades, Based on Original Claw User Feedback

The first MSI Claw was designed for ergonomic comfort and seamless gaming across platforms like Steam, Ubisoft, and Xbox. It even handled mobile games effortlessly. Now, thanks to valuable feedback from our community, we've made significant improvements in both hardware and software for the next generation—introducing the Claw 8 AI+ and Claw 7 AI+, powered by the latest Intel Lunar Lake processor for enhanced performance and efficiency.

Key Upgrades Based on Community Feedback
1) Enhanced Connectivity with Dual Thunderbolt 4 Ports. Both the Claw 8 AI+ and Claw 7 AI+ now feature two Thunderbolt 4 ports, allowing you to connect an external SSD without unplugging the power cord. This ensures greater flexibility and seamless gameplay.

HP Announces a Wide Range of New Products at its Amplify Conference

At its annual Amplify Conference, HP Inc. today announced new products and services designed to shape the future of work, empowering people and businesses to create and manage their own way of working. The company unveiled more than 80 PCs, AI-powered print tools for SMBs, and Workforce Experience Platform enhancements, all built to drive company growth and professional fulfillment.

"HP is translating AI into meaningful experiences that drive growth and fulfillment," said Enrique Lores, President and CEO at HP Inc. "We are shaping the future of work with game-changing AI innovations that seamlessly adapt to how people want to work."

Ubisoft Summarizes Rainbow Six Siege X Showcase, Announces June 10 Release

The next evolution of Rainbow Six Siege was revealed today at the Siege X Showcase. Launching on June 10, Siege X will introduce Dual Front, a dynamic new 6v6 game mode, as well as deliver foundational upgrades to the core game (including visual enhancements, an audio overhaul, rappel upgrades, and more) alongside revamped player protection systems and free access that will allow players to experience the unique tactical action of Rainbow Six Siege at no cost. Plus, from now through March 19, a free Dual Front closed beta is live on PC via Ubisoft Connect, PS5, and Xbox Series X|S, giving players a first chance to play the exciting new mode. Read on to find out how to get into the beta and try Dual Front for yourself.

Dual Front
Taking place on an entirely new map called District, Dual Front is a new mode that pits two teams of six Operators against each other in a fight to attack enemy sectors while defending their own. Players can choose from a curated roster of 35 Operators—both Attackers and Defenders—that will rotate twice per season. During each match, two objective points are live at all times, one in each team's lane; teams must plant a sabotage kit (akin to a defuser) in the opposing team's objective room and defend it in order to capture the sector and progress towards the final objective: the Base. Sabotage the Base to claim victory, but don't forget to defend your own sector, lest your foes progress faster than you and beat you to it.

Getac Introduces Next-gen AI-ready B360 and B360 Pro "Fully Rugged" Laptops

Getac Technology Corporation (Getac), a leading provider of rugged computing and mobile video solutions, today announced the launch of its next-generation B360 and B360 Pro fully rugged laptops, offering professionals across industries—including field services, utilities, and defense—two powerful yet versatile solutions to overcome the daily challenges they face.

Next generation AI-ready performance
The next generation B360 and B360 Pro combine fully rugged build quality with a host of innovative new technology upgrades. This includes the latest Intel Core Ultra Series 2 processors and Intel AI Boost technology, which enables users to leverage on-device Edge AI to quickly and seamlessly execute tasks. In a recent text-to-report evaluation test conducted with Getac industry customers using Llama 3.1 8B, AI applications running on the B360 were able to turn extensive texts into full reports in a matter of seconds. This powerful Edge AI performance offers significant operational advantages over cloud AI, particularly in scenarios requiring real-time processing, high levels of data privacy and security, offline capability, and cost efficiency.
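The article does not disclose Getac's software stack, but as a loose sketch of what on-device "text-to-report" with a quantized Llama 3.1 8B can look like, the example below uses the open-source llama-cpp-python library; the model filename and input file are hypothetical placeholders.

```python
# Minimal sketch of on-device "text-to-report" with a local Llama 3.1 8B.
# Illustrative only: Getac's actual stack is not disclosed. The model file
# and input file names are hypothetical; llama-cpp-python is just one way
# to run a quantized model fully offline (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.1-8b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=8192,        # enough context for "extensive texts"
    n_gpu_layers=-1,   # offload layers to local acceleration where supported
)

field_notes = open("field_notes.txt").read()  # hypothetical input text

out = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "Summarize these field notes into a structured report."},
        {"role": "user", "content": field_notes},
    ],
    max_tokens=1024,
)
print(out["choices"][0]["message"]["content"])
# Nothing leaves the device, which reflects the privacy, offline, and
# latency advantages of Edge AI described above.
```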

Physical SIM Support Reportedly in the Balance for Ultra-thin Smartphones w/ Snapdragon 8 Elite Gen 2 SoCs

According to Digital Chat Station—a repeat leaker of unannounced Qualcomm hardware—unnamed Android smartphone manufacturers are considering an eSIM-only operating model for future flagship devices. Starting with the iPhone 14 generation (2022), Apple has continued to deliver gadgets that are not reliant on "slotted-in" physical SIM cards. According to industry insiders, competitors could copy the market leader's homework—Digital Chat Station's latest Weibo blog post discusses the space-saving benefits of eSIM operation; being "conducive to lightweight and integrated design." Forthcoming top-tier slimline Android mobile devices are tipped to utilize Qualcomm's rumored second-generation "Snapdragon 8 Elite Gen 2" (SM8850) chipset.

Digital Chat Station reckons that: "SM8850 series phones at the end of the year are testing eSIM. Whether they can be implemented in China is still a question mark. Let's wait and see the iPhone 17 Air. In order to have an ultra-thin body, this phone directly cancels the physical SIM card slot. Either it will be a special phone for the domestic market, or it will get eSIM." The phasing out of physical SIM cards within the Chinese mobile market could be a tricky prospect for local OEMs, but reports suggest that "traditionally-dimensioned" flagship offerings will continue to support the familiar subscriber identity module standard. Physical SIM card purists often point out that the format still provides superior network support range.

RayNeo Unveils Next-Gen XR Glasses RayNeo Air 3s at MWC 2025

RayNeo, a leading innovator in consumer Augmented Reality (AR) technology, has unveiled its latest XR glasses, the RayNeo Air 3s, at MWC 2025. Alongside this groundbreaking release, the company showcased two other innovations: the AI-powered, Full-Color AR Glasses RayNeo X3 Pro and the Camera AI Glasses RayNeo V3. Together, these cutting-edge devices underscore RayNeo's unwavering commitment to redefining immersive experiences and enhancing everyday usability through advanced AR solutions.

RayNeo Air 3s: Lightweight XR for Seamless Daily Use
The RayNeo Air 3s redefines the landscape of lightweight XR glasses, seamlessly merging portability with state-of-the-art display technology. Equipped with 3840 Hz high-frequency dimming, a staggering 200,000:1 contrast ratio, and a 154% sRGB color gamut, the Air 3s delivers breathtaking image quality, setting a new benchmark for birdbath display solutions. Its expansive 201-inch virtual screen and TÜV Rheinland-certified eye comfort technology ensure an immersive yet comfortable experience, making it ideal for all-day wear.

Lenovo Delivers Unmatched Flexibility, Performance and Design with New ThinkSystem V4 Servers Powered by Intel Xeon 6 Processors

Today, Lenovo announced three new infrastructure solutions, powered by Intel Xeon 6 processors, designed to modernize and elevate data centers of any size to AI-enabled powerhouses. The solutions include next generation Lenovo ThinkSystem V4 servers that deliver breakthrough performance and exceptional versatility to handle any workload while enabling powerful AI capabilities in compact, high-density designs. Whether deploying at the edge, co-locating or leveraging a hybrid cloud, Lenovo is delivering the right mix of solutions that seamlessly unlock intelligence and bring AI wherever it is needed.

The new Lenovo ThinkSystem servers are purpose-built to run the widest range of workloads, including the most compute-intensive—from algorithmic trading to web serving, astrophysics to email, and CRM to CAE. Organizations can streamline management and boost productivity with the new systems, achieving up to 6.1x higher compute performance than previous-generation CPUs with Intel Xeon 6 P-core processors and up to 2x the memory bandwidth when using new MRDIMM technology, to scale and accelerate AI everywhere.

Synopsys Expands Its Hardware-Assisted Verification (HAV) Portfolio for Next-Gen Semiconductors

Synopsys, Inc. today announced the expansion of its industry-leading hardware-assisted verification (HAV) portfolio with new HAPS prototyping and ZeBu emulation systems using the latest AMD Versal Premium VP1902 adaptive SoC. The next-generation HAPS-200 prototyping and ZeBu-200 emulation systems deliver improved runtime performance, faster compile times, and better debug productivity. They are built on new Synopsys Emulation and Prototyping (EP-Ready) Hardware that optimizes customer return on investment by enabling emulation and prototyping use cases via reconfiguration and optimized software. ZeBu Server 5 is enhanced to deliver industry-leading scalability beyond 60 billion gates (BG) to address the escalating hardware and software complexity in SoC and multi-die designs. It continues to offer industry-best density to optimize data center space utilization.

"With the industry approaching 100s of billions of gates per chip and 100s of millions of lines of software code in SoC and multi-die solutions, verification of advanced designs poses never-before seen challenges," said Ravi Subramanian, chief product management officer, Synopsys. "Continuing our strong partnership with AMD, our new systems deliver the highest HAV performance while offering the ultimate flexibility between prototyping and emulation use. Industry leaders are adopting Synopsys EP-Ready Hardware platforms for silicon to system verification and validation."

AMD & CEA Partner for AI Compute Advancements

AMD (NASDAQ: AMD) today announced the signing of a Letter of Intent (LOI) with the Commissariat à l'énergie atomique et aux énergies alternatives (CEA) of France to collaborate on the advanced technologies, component and system architectures that will shape the future of AI computing. The collaboration will leverage the strengths of both organizations to push the boundaries on energy-efficient systems needed to support the world's most compute-intensive AI workloads in fields from energy to medicine.

Through this initiative, AMD and CEA will engage in a structured collaboration focused on technological advancements in next-generation AI compute infrastructure. AMD and CEA are also planning a symposium on the future of AI compute in 2025 that will convene European stakeholders and global technology providers, startups, supercomputing centers, universities and policy makers to accelerate collaboration around state-of-the-art and emerging AI computing technologies.

CoreWeave Launches Debut Wave of NVIDIA GB200 NVL72-based Cloud Instances

AI reasoning models and agents are set to transform industries, but delivering their full potential at scale requires massive compute and optimized software. The "reasoning" process involves multiple models, generating many additional tokens, and demands infrastructure with a combination of high-speed communication, memory and compute to ensure real-time, high-quality results. To meet this demand, CoreWeave has launched NVIDIA GB200 NVL72-based instances, becoming the first cloud service provider to make the NVIDIA Blackwell platform generally available. With rack-scale NVIDIA NVLink across 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs, scaling to up to 110,000 GPUs with NVIDIA Quantum-2 InfiniBand networking, these instances provide the scale and performance needed to build and deploy the next generation of AI reasoning models and agents.

NVIDIA GB200 NVL72 on CoreWeave
NVIDIA GB200 NVL72 is a liquid-cooled, rack-scale solution with a 72-GPU NVLink domain, which enables the six dozen GPUs to act as a single massive GPU. NVIDIA Blackwell features many technological breakthroughs that accelerate inference token generation, boosting performance while reducing service costs. For example, fifth-generation NVLink enables 130 TB/s of GPU bandwidth in one 72-GPU NVLink domain, and the second-generation Transformer Engine enables FP4 for faster AI performance while maintaining high accuracy. CoreWeave's portfolio of managed cloud services is purpose-built for Blackwell. CoreWeave Kubernetes Service optimizes workload orchestration by exposing NVLink domain IDs, ensuring efficient scheduling within the same rack. Slurm on Kubernetes (SUNK) supports the topology block plug-in, enabling intelligent workload distribution across GB200 NVL72 racks. In addition, CoreWeave's Observability Platform provides real-time insights into NVLink performance, GPU utilization and temperatures.
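A quick sanity check on the bandwidth figure quoted above: dividing the 130 TB/s aggregate across the 72-GPU domain recovers the per-GPU NVLink number, as the short Python sketch below shows.

```python
# Sanity-checking the NVLink figure quoted above: 130 TB/s of aggregate
# bandwidth across a 72-GPU NVLink domain implies, per GPU:
AGGREGATE_TB_S = 130
GPUS = 72
print(f"~{AGGREGATE_TB_S / GPUS * 1000:.0f} GB/s per GPU")  # ~1806 GB/s
# which lines up with the ~1.8 TB/s per-GPU figure NVIDIA cites for
# fifth-generation NVLink.
```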

ASUS ROG Takes a Closer Look at Astral GeForce RTX 5090 & 5080 Models

The next generation of graphics performance has arrived. We've prepared an all-new series of cards: ROG Astral. Featuring a new, sophisticated design and an outstanding cooling solution, the ROG Astral GeForce RTX 5090 and ROG Astral GeForce RTX 5080 are your premium picks for supercharging the performance of your gaming PC.

All this new hardware in the ROG Astral GeForce RTX 5090 requires no small amount of power so that it can stretch its legs and run. Your PSU should be capable of at least 1000 W to run this card—more on that later. The circuitry that delivers this power is just as important, and it's one reason why many enthusiasts prefer ROG graphics cards. We've equipped the ROG Astral GeForce RTX 5090 and 5080 for premium power delivery with 80-amp MOSFETs that supply over 35% more headroom than standard designs. A massive 24-phase VRM array for the GPU and a seven-phase VRM for the GDDR7 memory chips distribute the work of supplying power, ensuring rock-solid stability and long-lasting performance. For peace of mind, Power Detector+ monitoring in the GPU Tweak III app lets you verify that the 16-pin PCIe power connector is fully seated—the app can even tell you exactly which pin is not making proper contact, if that ever becomes a concern.

Ada, meet Blackwell
With the GeForce RTX 50 Series, NVIDIA debuts its latest Blackwell architecture. Armed with fifth-gen Tensor cores, new streaming multiprocessors optimized for neural shaders, and fourth-gen Ray Tracing cores built for Mega Geometry, the new graphics cards unlock access to the next generation of graphics technologies. For many gamers, the highlight of the new architecture is DLSS 4. DLSS is a revolutionary suite of neural rendering technologies that uses AI to boost FPS, reduce latency, and improve image quality. The latest breakthrough, DLSS 4, brings new Multi Frame Generation and enhanced Ray Reconstruction and Super Resolution. But there's more. NVIDIA Reflex 2 with Frame Warp provides game-winning responsiveness, and these cards are equipped to give you the best experience with ray-traced graphics yet.

NVIDIA RTX 5090 Geekbench Leak: OpenCL and Vulkan Tests Reveal True Performance Uplifts

The RTX 50-series fever continues to rage on, with independent reviews for the RTX 5080 and RTX 5090 dropping towards the end of this month. That does not stop benchmarks from leaking out, unsurprisingly, and a recent lineup of Geekbench listings has revealed the raw performance uplifts that can be expected from NVIDIA's next-generation GeForce flagship. A sizeable chunk of the tech community was certainly rather disappointed with NVIDIA's reliance on AI-powered frame generation for much of the claimed improvements in gaming. Now, it appears we can finally figure out how much raw improvement NVIDIA was able to squeeze out with consumer Blackwell, and the numbers, for the most part, appear decent enough.

Starting off with the OpenCL tests, the highest score that we have seen so far from the RTX 5090 puts it around 367,000 points—an acceptable jump over the RTX 4090, which manages around 317,000 points according to Geekbench's official average data. Of course, plenty of individual cards easily exceed the average scores, which must be kept in mind. That said, we are not aware of the details of the RTX 5090 that was tested, so pitting it against average scores does seem somewhat unfair. Moving to Vulkan, the performance uplift is much more satisfying, with the RTX 5090 managing a minimum of 331,000 points and a maximum of around 360,000 points, compared to the RTX 4090's 262,000—a sizeable 37% improvement at the highest end. Once again, we are comparing the best results posted so far against last year's averages, so expect slightly more modest gains in the real world. Once more reviews start appearing after the embargo lifts, the improvement figures should become much more reliable.
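The uplift percentages discussed above fall straight out of the leaked scores; the snippet below reproduces the arithmetic.

```python
# Reproducing the uplift percentages from the scores quoted above:
# (best leaked RTX 5090 result, RTX 4090 Geekbench average) per API.
scores = {
    "OpenCL": (367_000, 317_000),
    "Vulkan": (360_000, 262_000),
}
for api, (rtx5090, rtx4090) in scores.items():
    uplift = (rtx5090 / rtx4090 - 1) * 100
    print(f"{api}: ~{uplift:.0f}% uplift")  # OpenCL ~16%, Vulkan ~37%
```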

NVIDIA AI Expected to Transform $10 Trillion Healthcare & Life Sciences Industry

At yesterday's J.P. Morgan Healthcare Conference, NVIDIA announced new partnerships to transform the $10 trillion healthcare and life sciences industry by accelerating drug discovery, enhancing genomic research and pioneering advanced healthcare services with agentic and generative AI. The convergence of AI, accelerated computing and biological data is turning healthcare into the largest technology industry. Healthcare leaders IQVIA, Illumina and Mayo Clinic, as well as Arc Institute, are using the latest NVIDIA technologies to develop solutions that will help advance human health.

These solutions include AI agents that can speed clinical trials by reducing administrative burden, AI models that learn from biology instruments to advance drug discovery and digital pathology, and physical AI robots for surgery, patient monitoring and operations. AI agents, AI instruments and AI robots will help address the $3 trillion of operations dedicated to supporting industry growth and create an AI factory opportunity in the hundreds of billions of dollars.

Morse Micro Intros New World-Beating Wi-Fi SoC - Smallest, Fastest & Farthest-Reaching

Morse Micro, the world's leading provider of Wi-Fi HaLow chips based on the IEEE 802.11ah specification, has announced the launch of its highly anticipated second-generation MM8108 System-on-Chip (SoC). Building on the success of the first-generation MM6108 SoC, the MM8108 offers even better performance in all key areas of range, throughput, and power efficiency while also reducing the cost, effort, and time to bring the next generation of Wi-Fi HaLow-enabled products to market.

The MM8108 delivers class-leading data rates of up to 43.33 Mbps using world-first sub-GHz 256-QAM modulation at an 8 MHz bandwidth, making it ideal for a range of applications in agricultural, mining, industrial, home, and city environments. Its integrated 26 dBm power amplifier (PA) and low-noise amplifier (LNA) ensure strong RF performance and enable global regulatory certification without the need for external Surface Acoustic Wave (SAW) filters. The chip's exceptional power efficiency significantly extends battery life and enables the uptake of solar-powered Wi-Fi HaLow connected cameras and IoT devices.
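The 43.33 Mbps figure is consistent with 802.11ah OFDM numerology (HaLow reuses 802.11ac's channel structure clocked down by 10x). The sketch below shows the calculation; the subcarrier count, coding rate, and symbol timing are the standard's top-MCS parameters, stated here as assumptions for illustration.

```python
# How the 43.33 Mbps figure falls out of 802.11ah OFDM numerology.
# Parameters below are assumed top-MCS values for an 8 MHz HaLow channel
# (structurally 802.11ac's 80 MHz channel scaled down 10x).
data_subcarriers = 234     # data tones in an 8 MHz channel
bits_per_tone    = 8       # 256-QAM carries 8 bits per subcarrier
coding_rate      = 5 / 6   # highest convolutional coding rate
symbol_time_s    = 36e-6   # 32 us symbol + 4 us short guard interval

phy_rate = data_subcarriers * bits_per_tone * coding_rate / symbol_time_s
print(f"{phy_rate / 1e6:.2f} Mbps")  # 43.33 Mbps
```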

VeriSilicon Unveils Next-Gen Vitality Architecture GPU IP Series

VeriSilicon today announced the launch of its latest Vitality architecture Graphics Processing Unit (GPU) IP series, designed to deliver high-performance computing across a wide range of applications, including cloud gaming, AI PC, and both discrete and integrated graphics cards.

VeriSilicon's new-generation Vitality GPU architecture delivers exceptional advancements in computational performance with scalability. It incorporates advanced features such as a configurable Tensor Core AI accelerator and a 32 MB to 64 MB Level 3 (L3) cache, offering both powerful processing and superior energy efficiency. Additionally, the Vitality architecture supports up to 128 channels of cloud gaming per core, addressing the needs of high-concurrency, high-image-quality cloud-based entertainment, while enabling large-scale desktop gaming and applications on Windows systems. With robust support for Microsoft DirectX 12 APIs and AI acceleration libraries, this architecture is ideally suited for a wide range of performance-intensive applications and complex computing workloads.

Google Announces Android XR

We started Android over a decade ago with a simple idea: transform computing for everyone. Android powers more than just phones—it's on tablets, watches, TVs, cars and more.

Now, we're taking the next step into the future. Advancements in AI are making interacting with computers more natural and conversational. This inflection point enables new extended reality (XR) devices, like headsets and glasses, to understand your intent and the world around you, helping you get things done in entirely new ways.

Amazon AWS Announces General Availability of Trainium2 Instances, Reveals Details of Next Gen Trainium3 Chip

At AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company, today announced the general availability of AWS Trainium2-powered Amazon Elastic Compute Cloud (Amazon EC2) instances and introduced new Trn2 UltraServers, enabling customers to train and deploy today's latest AI models, as well as future large language models (LLMs) and foundation models (FMs), with exceptional levels of performance and cost efficiency. The company also unveiled its next-generation Trainium3 chips.

"Trainium2 is purpose built to support the largest, most cutting-edge generative AI workloads, for both training and inference, and to deliver the best price performance on AWS," said David Brown, vice president of Compute and Networking at AWS. "With models approaching trillions of parameters, we understand customers also need a novel approach to train and run these massive workloads. New Trn2 UltraServers offer the fastest training and inference performance on AWS and help organizations of all sizes to train and deploy the world's largest models faster and at a lower cost."