<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Epium</title>
	<atom:link href="https://epium.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://epium.com</link>
	<description></description>
	<lastBuildDate>Fri, 10 Apr 2026 18:14:02 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://cdn.epium.com/2025/02/site-icon-100x100.png</url>
	<title>Epium</title>
	<link>https://epium.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Intel and SambaNova pitch modular inference architecture</title>
		<link>https://epium.com/news/intel-and-sambanova-pitch-modular-inference-architecture/</link>
		
		<dc:creator><![CDATA[AI News]]></dc:creator>
		<pubDate>Fri, 10 Apr 2026 18:13:48 +0000</pubDate>
				<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[data-centers]]></category>
		<category><![CDATA[intel]]></category>
		<category><![CDATA[semiconductors]]></category>
		<category><![CDATA[artificial intelligence infrastructure]]></category>
		<category><![CDATA[inference]]></category>
		<category><![CDATA[rdus]]></category>
		<category><![CDATA[sambanova]]></category>
		<category><![CDATA[xeon 6]]></category>
		<guid isPermaLink="false">https://epium.com/?p=25767</guid>

					<description><![CDATA[Intel and SambaNova are positioning a mixed-hardware inference design as an alternative to GPU-only deployments. The approach splits prefill, decode, and orchestration across different processors for demanding Artificial Intelligence agent workloads.]]></description>
										<content:encoded><![CDATA[<p>Inference is becoming a central battleground for compute providers, and the industry is increasingly moving away from the idea that GPUs alone will dominate it. Intel and SambaNova are now advancing a heterogeneous inference architecture that pairs SambaNova RDUs with Intel Xeon 6 CPUs, part of a broader move toward disaggregated inference designs.</p>
<p>The configuration assigns GPUs to prefill, Intel Xeon 6 processors to host, orchestration, and general-purpose “action” tasks, and SambaNova RDUs to decode. SambaNova describes the combination as a heterogeneous hardware solution built to deliver premium inference for the most demanding agentic Artificial Intelligence applications. The partnership does not tie the design to a specific hyperscaler GPU option, leaving room for other accelerators or ASICs, although no GPU-specific performance figures were provided.</p>
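<p>Neither company has published code for the split, but the division of labor is straightforward to sketch. Below is a minimal, hypothetical Python outline of the three-way routing described above; the function names and stand-in data structures are illustrative assumptions, not an Intel or SambaNova API.</p>
<pre><code># Hypothetical sketch of the prefill/decode/orchestration split; the
# routing, names, and data structures are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    max_new_tokens: int

def prefill_on_gpu(req):
    """Compute-bound pass over the full prompt, mapped to the GPU pool."""
    return [f"kv({tok})" for tok in req.prompt.split()]  # stand-in KV cache

def decode_on_rdu(kv_cache, max_new_tokens):
    """Bandwidth-bound token-by-token generation, mapped to the RDU pool."""
    return " ".join("tok" for _ in range(max_new_tokens))  # stand-in output

def orchestrate_on_xeon(req):
    """Host CPU handles batching, tool calls, and hand-offs between phases."""
    kv = prefill_on_gpu(req)                      # phase 1: prefill
    return decode_on_rdu(kv, req.max_new_tokens)  # phase 2: decode

print(orchestrate_on_xeon(Request("summarize this quarterly report", 8)))
</code></pre>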
<p>SambaNova’s SN50 is the key accelerator in the design. It pairs 2 TB of DDR5 memory with 64 GB of HBM3 and 520 MB of SRAM, a combination the company positions as delivering minimal latency, high throughput, and high capacity. SambaNova says the DRAM, SRAM, and HBM tiers together enable what it calls “agentic caching.” Intel and SambaNova also say Xeon 6 CPUs were found to be well suited to “end-to-end coding agent workflows” compared with Arm options.</p>
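<p>SambaNova has not detailed how “agentic caching” places data across the three memories, but the appeal of the hierarchy can be illustrated with a toy placement policy. The capacities below are the SN50 figures quoted above; the greedy fastest-tier-first policy is an assumption for illustration only, not SambaNova’s design.</p>
<pre><code># Toy placement across the SN50's quoted SRAM / HBM3 / DDR5 capacities.
# The policy (fastest tier with room wins) is an assumption, not
# SambaNova's actual caching design.
TIERS = [                          # (name, capacity in MB)
    ("SRAM", 520),                 # hottest working set
    ("HBM3", 64 * 1024),           # active KV caches
    ("DDR5", 2 * 1024 * 1024),     # bulk agent/session state
]

def place(size_mb, used):
    """Place an entry in the fastest tier that still has room."""
    for name, cap_mb in TIERS:
        free = cap_mb - used.get(name, 0)
        if size_mb > free:
            continue
        used[name] = used.get(name, 0) + size_mb
        return name
    raise MemoryError("all tiers full")

used = {}
for entry_mb in (200, 400, 30_000, 90_000):
    print(f"{entry_mb} MB placed in {place(entry_mb, used)}")
# 200 MB lands in SRAM; 400 MB no longer fits there and spills to HBM3;
# the 90,000 MB entry exceeds remaining HBM3 and falls through to DDR5.
</code></pre>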
<p>The pairing differs from NVIDIA’s approach by emphasizing a more modular and comparatively lower-commitment path to disaggregated inference infrastructure. The setup is framed as a practical option for hyperscalers that want rack-scale systems built around the “prefill + decode” split without committing to a tightly defined infrastructure stack. Intel’s role, at least for now, appears focused on providing the Xeon host CPU rather than deeper RDU integration.</p>
<p>The collaboration also reflects a broader relationship between the two companies. Intel CEO Lip-Bu Tan participated in SambaNova’s latest funding round and was an early investor in the company. Intel’s reported plans to acquire SambaNova were halted after a board disagreement, leaving Intel as a funding participant instead.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Global Artificial Intelligence governance pulls back</title>
		<link>https://epium.com/news/global-artificial-intelligence-governance-pulls-back/</link>
		
		<dc:creator><![CDATA[AI News]]></dc:creator>
		<pubDate>Fri, 10 Apr 2026 14:12:25 +0000</pubDate>
				<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[european union]]></category>
		<category><![CDATA[policy]]></category>
		<category><![CDATA[regulation]]></category>
		<category><![CDATA[united states]]></category>
		<category><![CDATA[artificial intelligence regulation]]></category>
		<category><![CDATA[colorado]]></category>
		<category><![CDATA[eu ai act]]></category>
		<category><![CDATA[iso 42001]]></category>
		<category><![CDATA[nist ai risk management framework]]></category>
		<guid isPermaLink="false">https://epium.com/?p=25765</guid>

					<description><![CDATA[A broad pullback in Artificial Intelligence regulation is taking shape across Colorado, the European Union, Canada, the United Kingdom, and the United States. The shift reflects implementation gaps, competitive pressure, and resistance to heavy compliance burdens rather than the end of governance efforts.]]></description>
										<content:encoded><![CDATA[<p>Global Artificial Intelligence governance has shifted from rapid expansion to visible retrenchment. Colorado is moving to narrow its landmark state law, the EU is delaying key parts of the AI Act, Canada’s federal legislation has collapsed, the UK has declined to adopt a comprehensive statutory regime, and the Biden-era federal Artificial Intelligence framework in the United States has been revoked. Regulation is not disappearing, but the early model of broad, comprehensive governance is being reshaped by implementation difficulties, geopolitical competition, and business opposition to compliance costs.</p>
<p>Colorado offers the clearest example of rollback. SB 24-205, passed in May 2024, created a risk-based framework for high-risk Artificial Intelligence systems. A later special session produced a five-month delay, pushing the effective date from February 1 to June 30, 2026. In March 2026, a working group released a draft repeal-and-replace proposal centered on automated decision-making technology in consequential decisions. The replacement would remove the original law’s duty of reasonable care, mandatory impact assessments, formal risk management programs, annual reviews, and attorney general reporting requirements. In their place, it would use a lighter notice-and-rights model focused on disclosure, access, correction, recordkeeping, and meaningful human review, while relying on existing civil rights and consumer protection law for discrimination claims. At the same time, the new definition of covered ADMT is broader in one respect because it can reach screening, scoring, ranking, and routing tools if they materially influence outcomes. Colorado’s draft replacement has not yet been enacted, and the June 30 effective date still looms. The session ends in May.</p>
<p>The EU is delaying, not abandoning, its framework. High-risk Artificial Intelligence system requirements were set to apply beginning August 2, 2026. The Digital Omnibus would tie that timeline to standards and compliance tools that are still unfinished, with backstop dates of December 2, 2027 for standalone high-risk systems and August 2, 2028 for Artificial Intelligence embedded in regulated products. That creates a potential 24-month delay for the provisions with the greatest operational impact. The package also narrows documentation duties, expands simplifications beyond SMEs, limits some database registration obligations, and shifts Artificial Intelligence literacy responsibilities toward the Commission and member states. The result is a clear reduction in the compliance burden even though the AI Act remains in force.</p>
<p>The wider pattern extends beyond Colorado and the EU. Canada’s Bill C-27 died when parliament was prorogued in January 2025, leaving no binding federal Artificial Intelligence law. In the UK, officials confirmed in March 2026 that there is no comprehensive bill, reflecting a lighter-touch strategy aimed at competing on adoption. In the EU, the AI Liability Directive was withdrawn in February 2025, removing a parallel civil liability mechanism. In the U.S., Executive Order 14110 was revoked on President Trump’s first day in office, while later executive actions promoted a more innovation-first approach and challenged some state-level governance efforts. The Senate voted 99-1 in July 2025 to remove a proposed 10-year moratorium on new state Artificial Intelligence laws, showing that broad federal preemption still lacks support.</p>
<p>Even with these reversals, governance pressures remain strong through sector-specific rules, enforcement, contracts, procurement, and litigation. Existing laws such as HIPAA, ECOA, the Fair Housing Act, and Title VII still apply to Artificial Intelligence decision-making in their domains, and federal agencies continue to treat Artificial Intelligence conduct as an enforcement priority. The most durable compliance anchor in this environment is standards-based governance. The NIST AI Risk Management Framework, released in January 2023, was voluntary at first, but within 18 months it appeared in executive orders, state legislation, and federal contractor requirements. ISO 42001 offers a certifiable management-system model that can support multinational operations across different jurisdictions. In a volatile legal landscape, standards-based programs are presented as a more stable investment than building only for rules that may be delayed, amended, or repealed.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Fannie Mae sets governance framework for Artificial Intelligence and machine learning use</title>
		<link>https://epium.com/news/fannie-mae-sets-governance-framework-for-artificial-intelligence-and-machine-learning-use/</link>
		
		<dc:creator><![CDATA[AI News]]></dc:creator>
		<pubDate>Fri, 10 Apr 2026 10:24:10 +0000</pubDate>
				<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[fannie mae]]></category>
		<category><![CDATA[machine learning]]></category>
		<category><![CDATA[mortgages]]></category>
		<category><![CDATA[regulation]]></category>
		<category><![CDATA[mortgage servicing]]></category>
		<category><![CDATA[seller servicers]]></category>
		<guid isPermaLink="false">https://epium.com/?p=25763</guid>

					<description><![CDATA[Fannie Mae issued Lender Letter LL-2026-04 outlining a governance framework for Seller/Servicers using Artificial Intelligence and machine learning in origination and servicing. The guidance was published April 8, 2026.]]></description>
										<content:encoded><![CDATA[<p>Fannie Mae issued Lender Letter LL-2026-04 to provide a governance framework for Seller/Servicers using Artificial Intelligence or machine learning in their origination or servicing practices. The guidance signals a formal structure for how these technologies are addressed within Single-Family operations.</p>
<p>The letter is presented as policy guidance for Seller/Servicers and is focused specifically on the use of Artificial Intelligence and machine learning in mortgage origination and servicing. It frames governance as the central requirement for institutions applying these technologies in operational workflows tied to Fannie Mae business.</p>
<p>The notice was published April 8, 2026. It appears in the Single-Family News Center and is accompanied by a downloadable lender letter. Fannie Mae also listed the item among its recent news on the same date, alongside Announcement SVC-2026-03 &#8211; Servicing Guide Update and an Update to UCD timeline and v2.0 Specification resources published April 2, 2026.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>OpenAI outlines policy ideas for the Artificial Intelligence labor transition</title>
		<link>https://epium.com/news/openai-outlines-policy-ideas-for-the-artificial-intelligence-labor-transition/</link>
		
		<dc:creator><![CDATA[AI News]]></dc:creator>
		<pubDate>Fri, 10 Apr 2026 06:14:11 +0000</pubDate>
				<category><![CDATA[artificial intelligence policy]]></category>
		<category><![CDATA[enterprise technology]]></category>
		<category><![CDATA[labor]]></category>
		<category><![CDATA[openai]]></category>
		<category><![CDATA[enterprise workers]]></category>
		<category><![CDATA[labor transition]]></category>
		<category><![CDATA[tax policy]]></category>
		<guid isPermaLink="false">https://epium.com/news/openai-outlines-policy-ideas-for-the-artificial-intelligence-labor-transition/</guid>

					<description><![CDATA[OpenAI has published a policy proposal focused on how Artificial Intelligence could reshape work, wages and job quality. The document argues for stronger worker input, tax changes and targeted funding, while critics say it stops short of real accountability.]]></description>
										<content:encoded><![CDATA[<p>OpenAI has released “Industrial Policy for the Intelligence Age,” a policy proposal aimed at the effects of Artificial Intelligence on workers and the broader economy. The document shows the company is thinking not only about how superintelligence could affect consumers, but also how it could reshape enterprise jobs. OpenAI envisions an Artificial Intelligence workforce transition in which workers have a voice in how these systems are introduced, with deployments prioritized when they improve job quality.</p>
<p>The proposal says workers will be critical to understanding how Artificial Intelligence is used in workplaces, and it calls for investment to offset Artificial Intelligence’s effects on work, wages and job quality across industries and sectors. OpenAI also floated initiatives such as a four-day work week. The company framed the effort as a response to a shifting labor market in which some employees in legal and technology roles are already feeling pressure from Artificial Intelligence agents that perform coding and knowledge-based tasks such as summarization and data gathering. Oracle, for example, eliminated as many as 30,000 roles globally this month as it works to restructure and become an Artificial Intelligence compute provider.</p>
<p>OpenAI also suggested changes to tax policy, including higher corporate income and capital gains taxes, particularly on Artificial Intelligence-related revenue, alongside lower or eliminated taxes on labor income. Michael Bennett, associate vice chancellor for data science and Artificial Intelligence strategy at the University of Illinois Chicago, said the proposal appears designed to show users, society and current and future employees that OpenAI recognizes employment disruption. At the same time, Chirag Shah, a professor in the Information School at the University of Washington, argued that the proposals are not substantive enough and said the company is not taking real responsibility for the consequences of pushing toward superintelligence.</p>
<p>Bennett also said the policy may serve as a protective measure for OpenAI by showing it has considered economic, environmental, labor, education and political consequences before those issues intensify. That is especially relevant for companies like OpenAI that are looking to go public. Alongside its policy ideas, OpenAI is launching fellowships and research grants of up to $100,000, along with up to $1 million in API credits, for projects that explore the economic models described in the proposal.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Anthropic launches Claude Mythos for Project Glasswing</title>
		<link>https://epium.com/news/anthropic-launches-claude-mythos-for-project-glasswing/</link>
		
		<dc:creator><![CDATA[AI News]]></dc:creator>
		<pubDate>Fri, 10 Apr 2026 00:20:35 +0000</pubDate>
				<category><![CDATA[anthropic]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[software security]]></category>
		<category><![CDATA[threat intelligence]]></category>
		<category><![CDATA[agentic artificial intelligence]]></category>
		<category><![CDATA[claude mythos]]></category>
		<category><![CDATA[project glasswing]]></category>
		<category><![CDATA[zero-day vulnerabilities]]></category>
		<guid isPermaLink="false">https://epium.com/?p=25747</guid>

					<description><![CDATA[Anthropic has introduced Claude Mythos Preview, a new frontier Artificial Intelligence model positioned as a major advance in cybersecurity capability. The model is being used to power Project Glasswing, a coalition effort to secure critical software before similar capabilities spread more widely.]]></description>
										<content:encoded><![CDATA[<p>Anthropic has officially announced Mythos Preview as the foundation for Project Glasswing, describing the model as a major leap beyond its existing Haiku, Sonnet, and Opus systems. Mythos sits in a fourth tier called Copybara and is presented as superior to other frontier Artificial Intelligence models. Anthropic says its cyber performance comes from strong agentic coding and reasoning abilities, giving it leading results across software coding tasks.</p>
<p>The company is framing the launch around both defensive potential and serious misuse risk. In the last few weeks, Mythos Preview has identified thousands of zero-day vulnerabilities, many of them classified as critical. Several are 10 or 20 years old, including a 27-year-old bug in OpenBSD. A 16-year-old vulnerability in video software had reportedly survived five million hits from other automated testing tools without detection. Anthropic also says the model autonomously found and chained together several flaws in the Linux kernel, allowing escalation from ordinary user access to complete control of the machine.</p>
<p>Anthropic argues that this level of capability could enable cyberattacks that move too quickly and become too sophisticated for defenders to stop. The concern builds on a case the company disclosed in November 2025 involving what it called the first reported Artificial Intelligence-orchestrated cyber espionage campaign. Anthropic said it detected suspicious activity in mid-September 2025 and later assessed with high confidence that a Chinese state-sponsored group had manipulated Claude Code, using agentic capabilities not just for advice but to carry out cyberattacks.</p>
<p>Project Glasswing is intended as a defensive response before such tools proliferate more broadly. Anthropic says Claude Mythos Preview is a general-purpose, unreleased frontier model that has completed training, but the firm does not plan to make Mythos Preview generally available. The initiative brings together Amazon, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks to help secure critical software. Anthropic says the work of defending cyber infrastructure might take years, while frontier Artificial Intelligence capabilities are likely to advance substantially over just the next few months.</p>
<p>Anthropic is also extending access beyond the main partnership to more than 40 other organizations that build or maintain critical software so they can scan and secure first-party and open-source systems. Microsoft, Cisco, CrowdStrike, and the Linux Foundation each described the effort as urgent and collaborative, with a particular focus on giving defenders and open-source maintainers new ways to identify and fix vulnerabilities at scale before malicious actors gain the same capabilities.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Softr launches Artificial Intelligence no-code platform for business teams</title>
		<link>https://epium.com/news/softr-launches-artificial-intelligence-no-code-platform-for-business-teams/</link>
		
		<dc:creator><![CDATA[AI News]]></dc:creator>
		<pubDate>Thu, 09 Apr 2026 23:06:47 +0000</pubDate>
				<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[crm]]></category>
		<category><![CDATA[digital transformation]]></category>
		<category><![CDATA[no-code]]></category>
		<category><![CDATA[saas]]></category>
		<category><![CDATA[automation]]></category>
		<category><![CDATA[business software]]></category>
		<category><![CDATA[softr]]></category>
		<guid isPermaLink="false">https://epium.com/?p=25759</guid>

					<description><![CDATA[Softr has introduced an Artificial Intelligence-native no-code platform aimed at non-technical teams building business software. The company is targeting the gap between fast prototypes and systems that can support live operations with real data, permissions and security.]]></description>
										<content:encoded><![CDATA[<p>Softr has launched an Artificial Intelligence-native no-code platform that lets non-technical teams build business software. The update adds an Artificial Intelligence Co-Builder that allows users to describe an application in plain language and receive a working system with a database, user interface, permissions and business logic. The focus is on software for day-to-day business use rather than early-stage mock-ups.</p>
<p>The Berlin-based company has operated in the no-code software market since launching in 2020, and it says it has grown to more than 1 million builders and 7,000 organizations, including Netflix, Google, Stripe, UPS and Clay. Softr is positioning the new platform around what it sees as a common weakness in Artificial Intelligence software tools: many can generate quick surface-level outputs from a prompt, but still require users to handle code, fix errors and rebuild workflows before the software is ready for production.</p>
<p>That distinction is especially important for internal tools and customer-facing systems that depend on live data, defined user roles and access controls. Softr says its platform includes authentication, user roles, permissions and hosting from the outset. It also offers a visual database, custom workflows and integrations with other tools, aiming to make applications easier for non-technical teams to maintain over time without returning to developers for routine changes.</p>
<p>Teams already use Softr for client portals, customer relationship management systems, company intranets and other operational tools across industries. The latest product move extends that model by using Artificial Intelligence to assemble more of the underlying structure automatically. Softr says users can request a specific business tool and receive core elements that connect to live data and can be used immediately by staff, customers or partners, depending on the use case.</p>
<p>The launch reflects a broader push in business software to serve employees outside information technology departments, including operations, finance, human resources and client teams. Softr argues that no-code tools must address security, governance and reliability to move beyond experimentation. The company also says it has a profitable base, which it is now combining with Artificial Intelligence as it expands the product. Softr did not disclose pricing or financial details at launch.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Artificial Intelligence speeds quantum encryption threat timeline</title>
		<link>https://epium.com/news/artificial-intelligence-speeds-quantum-encryption-threat-timeline/</link>
		
		<dc:creator><![CDATA[AI News]]></dc:creator>
		<pubDate>Thu, 09 Apr 2026 18:15:31 +0000</pubDate>
				<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[google]]></category>
		<category><![CDATA[quantum-computing]]></category>
		<category><![CDATA[encryption]]></category>
		<category><![CDATA[oratomic]]></category>
		<category><![CDATA[quantum computing]]></category>
		<guid isPermaLink="false">https://epium.com/?p=25745</guid>

					<description><![CDATA[Research from Google and Oratomic suggests quantum computers capable of breaking core internet encryption may arrive sooner than expected. Artificial Intelligence played a key role in improving one of the new algorithms, raising fresh urgency around post-quantum security.]]></description>
										<content:encoded><![CDATA[<p>New research from Google and quantum startup Oratomic suggests quantum computers capable of breaking the encryption protocols that secure the internet may arrive sooner than expected. Cybersecurity researchers described the results as a major warning for the internet’s security timeline, and Cloudflare said it was accelerating its deadline to prepare for quantum computers to 2029. The U.S. National Institute of Standards and Technology has set a 2035 deadline to prepare for their arrival, but multiple quantum computing experts said the combined Google and Oratomic results could significantly shorten the development time of a quantum computer that threatens encryption.</p>
<p>Quantum computers use qubits to perform some calculations far faster than ordinary computers, creating a long-term threat to systems that depend on encryption. Everything from private messages to classified documents relies on the fact that conventional machines would need impractically long timescales to break the underlying encryption, while a quantum computer could theoretically do the same work in days. A 2025 survey of experts put the chance of that changing within the next decade at 39%, as quantum hardware improves and algorithms become more efficient. Researchers warned that if quantum machines arrive before post-quantum protections are fully deployed, the risks could include data leaks, extortion, and businesses being taken offline.</p>
<p>Artificial Intelligence was described by Oratomic’s authors as central to the development of their algorithm. In atomic quantum computers, it can take 100 to 1,000 atoms to encode a single qubit. But the algorithm found by the Oratomic researchers requires just three atoms to encode a qubit, reducing the number of particles required to build an atomic quantum computer by roughly 100 times. Initially, the performance of the team’s key algorithms was about 1,000 times worse than required, and researchers said the approach would not have worked in that state. After turning to OpenEvolve, an open-source tool that uses large language models including Google’s Gemini and Anthropic’s Claude, the team said Artificial Intelligence generated useful ideas by combining earlier scientific results in a novel way and exploring thousands of possibilities.</p>
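<p>The particle-count claim is easy to check with back-of-the-envelope arithmetic; the figures below come from the paragraph above, and the calculation is illustrative only.</p>
<pre><code># If conventional atomic encodings need 100-1,000 atoms per qubit and the
# new algorithm needs 3, the reduction spans roughly 33x to 333x; the
# geometric midpoint of that range is close to the quoted "100 times".
new_atoms_per_qubit = 3
for typical in (100, 1_000):
    factor = typical / new_atoms_per_qubit
    print(f"{typical} atoms/qubit down to {new_atoms_per_qubit}: {factor:.0f}x fewer atoms")
</code></pre>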
<p>The work remains preliminary. The paper has not yet been peer-reviewed, and some experts said several assumptions in the research remain untested. The authors said many open challenges still stand between the current findings and a dangerous quantum computer. Even so, the results have already prompted attention from industry and government. Members of the Oratomic team briefed U.S. government officials before publication, and Google has also moved to expand its own atomic quantum computing effort while publicly outlining plans to secure its systems against quantum computers by 2029.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>New methods aim to improve Large Language Model reasoning</title>
		<link>https://epium.com/news/new-methods-aim-to-improve-large-language-model-reasoning/</link>
		
		<dc:creator><![CDATA[AI News]]></dc:creator>
		<pubDate>Thu, 09 Apr 2026 15:33:45 +0000</pubDate>
				<category><![CDATA[large language models]]></category>
		<category><![CDATA[machine learning]]></category>
		<category><![CDATA[research]]></category>
		<category><![CDATA[arxiv]]></category>
		<category><![CDATA[coding benchmarks]]></category>
		<category><![CDATA[hallucinations]]></category>
		<category><![CDATA[reasoning]]></category>
		<guid isPermaLink="false">https://epium.com/?p=25757</guid>

					<description><![CDATA[A new study on arXiv outlines algorithmic techniques designed to strengthen Large Language Model reasoning and reduce hallucinations. The work reports better logical consistency and stronger performance on mathematical and coding benchmarks.]]></description>
										<content:encoded><![CDATA[<p>A study published on arXiv presents new techniques to improve the reasoning capabilities of Large Language Models. The work targets two persistent weaknesses in these systems: hallucinations and inconsistent logic during complex problem-solving.</p>
<p>The research centers on new algorithmic frameworks intended to make model outputs more dependable. The goal is to reduce false or unsupported responses while improving logical consistency when models handle challenging reasoning tasks.</p>
<p>Researchers reported significant gains in model reliability across mathematical and coding benchmarks. The findings suggest that more structured approaches to inference can improve accuracy and make Large Language Models more effective in tasks that require step-by-step reasoning.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Nvidia acquisition of SchedMD raises Slurm neutrality concerns</title>
		<link>https://epium.com/news/nvidia-acquisition-of-schedmd-raises-slurm-neutrality-concerns/</link>
		
		<dc:creator><![CDATA[AI News]]></dc:creator>
		<pubDate>Thu, 09 Apr 2026 12:17:14 +0000</pubDate>
				<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[nvidia]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[supercomputing]]></category>
		<category><![CDATA[schedmd]]></category>
		<category><![CDATA[slurm]]></category>
		<guid isPermaLink="false">https://epium.com/?p=25743</guid>

					<description><![CDATA[Nvidia's purchase of SchedMD has given it control of Slurm, an open-source scheduler that sits at the center of many supercomputing and large-model training systems. Researchers and engineers are watching for signs that support could tilt toward Nvidia hardware over AMD and Intel alternatives.]]></description>
										<content:encoded><![CDATA[<p>Nvidia’s December acquisition of SchedMD has handed it control of Slurm, the open-source scheduling software that runs around 60% of the world’s supercomputers and underpins large-language-model training workloads at labs including Anthropic, Meta and Mistral. The deal has raised concern among Artificial Intelligence specialists and high performance computing engineers that Nvidia could gradually shape a critical layer of infrastructure to favor its own chips and networking technology.</p>
<p>Slurm is presented as a strategic control point because it turns clusters of GPUs into usable systems for supercomputing and model training. Its role spans workloads such as weather forecasting, nuclear weapons design, and frontier model development, making vendor neutrality especially important. One key test will be how quickly Nvidia integrates AMD’s upcoming chips into Slurm compared with how quickly it adds support for its own InfiniBand networking and other Nvidia-specific hardware. Intersect360 Research CEO Addison Snell warned that Nvidia “could take what’s a common open-source tool and make it so that it works better or exclusively for its own parts.”</p>
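<p>The neutrality question is concrete because vendor hardware enters every Slurm job through scheduler directives. A minimal sketch of a GPU job submission appears below; the job name, resource counts, and training script are hypothetical, and running it assumes a live Slurm cluster with sbatch on the PATH.</p>
<pre><code># Minimal sketch: how a cluster job requests hardware through Slurm.
# The gres (generic resource) line is where GPU type and count are named,
# which is why scheduler support for new chips matters.
import subprocess

BATCH_SCRIPT = """#!/bin/bash
#SBATCH --job-name=train-llm
#SBATCH --nodes=2
#SBATCH --gres=gpu:4           # per-node GPU request; type names vary by site
#SBATCH --time=04:00:00
srun python train.py           # train.py is a hypothetical workload
"""

# sbatch accepts a batch script on stdin; this requires a real cluster.
subprocess.run(["sbatch"], input=BATCH_SCRIPT, text=True, check=True)
</code></pre>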
<p>The concern is reinforced by Nvidia’s 2022 acquisition of Bright Computing, a cluster-management company. Artificial Intelligence industry sources cited by Reuters said Bright’s software became “optimised for Nvidia, creating a performance penalty for users of other chips without additional work”. Nvidia disputed that characterization and said Bright supports “nearly any” CPU or GPU cluster. The latest acquisition has therefore prompted scrutiny over whether a similar pattern could emerge around Slurm.</p>
<p>OpenAI is noted as an exception because it does not use Slurm and instead relies on Google-derived scheduling. That limits Nvidia’s leverage to the broader frontier lab and high performance computing ecosystem rather than the entire industry. For universities, national supercomputing facilities, and enterprises running mixed-vendor GPU clusters, the immediate issue is contingency planning. Slurm remains open-source, so a fork is technically possible, but “it takes effort to produce fully working software”, making governance and development patterns important signals to monitor.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Mustafa Suleyman says Artificial Intelligence compute growth is still accelerating</title>
		<link>https://epium.com/news/mustafa-suleyman-says-artificial-intelligence-compute-growth-is-still-accelerating/</link>
		
		<dc:creator><![CDATA[AI News]]></dc:creator>
		<pubDate>Thu, 09 Apr 2026 07:12:14 +0000</pubDate>
				<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[machine learning]]></category>
		<category><![CDATA[microsoft]]></category>
		<category><![CDATA[chips]]></category>
		<category><![CDATA[compute]]></category>
		<category><![CDATA[energy]]></category>
		<category><![CDATA[gpus]]></category>
		<guid isPermaLink="false">https://epium.com/news/mustafa-suleyman-says-artificial-intelligence-compute-growth-is-still-accelerating/</guid>

					<description><![CDATA[Mustafa Suleyman argues that Artificial Intelligence development is being propelled by simultaneous advances in chips, memory, networking, and software efficiency rather than nearing a hard limit. He contends that rising compute capacity and falling deployment costs will push systems beyond chatbots toward more capable agents.]]></description>
										<content:encoded><![CDATA[<p>Mustafa Suleyman frames modern Artificial Intelligence progress as an exponential phenomenon that runs counter to human intuition about linear change. From the time he began work on Artificial Intelligence in 2010 to now, the amount of compute used to train frontier Artificial Intelligence models has grown by a staggering 1 trillion times, from roughly 10¹⁴ floating-point operations for early systems to over 10²⁶ for today’s largest models. He presents that jump as the central force behind recent advances and rejects recurring claims that development is close to hitting a wall.</p>
<p>He argues that the acceleration is coming from several layers of the computing stack at once. Nvidia’s chips have delivered a more than sevenfold increase in raw performance in just six years, from 312 teraflops in 2020 to 2,250 teraflops today. Microsoft’s Maia 200 chip, launched this January, delivers 30% better performance per dollar than any other hardware in the company’s fleet. HBM3 triples the bandwidth of its predecessor, while interconnect technologies such as NVLink and InfiniBand link hundreds of thousands of GPUs into warehouse-size supercomputers. Where training a language model took 167 minutes on eight GPUs in 2020, it now takes under four minutes on equivalent modern hardware. Moore’s Law would predict only about a 5x improvement over this period; he says the industry saw 50x instead, and notes a shift from two GPUs training AlexNet in 2012 to over 100,000 GPUs in today’s largest clusters.</p>
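<p>The ratios in that paragraph can be checked directly; the inputs are the figures quoted above, and the Moore’s Law line simply computes the doubling period that the quoted 5x figure would imply.</p>
<pre><code># Sanity-checking the quoted hardware ratios (inputs from the article).
import math

print(2_250 / 312)    # ~7.2x: the "more than sevenfold" chip gain
print(167 / 4)        # ~42x: training-time speedup, near the quoted 50x

# "About 5x" from Moore's Law over six years implies this doubling period:
print(6 * math.log(2) / math.log(5))   # ~2.6 years per doubling
</code></pre>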
<p>Suleyman also points to software gains that are making models cheaper to train and serve. Research from Epoch AI suggests that the compute required to reach a fixed performance level halves approximately every eight months, much faster than the traditional 18-to-24-month doubling of Moore’s Law. The costs of serving some recent models have collapsed by a factor of up to 900 on an annualized basis. He says those trends indicate that Artificial Intelligence is becoming radically cheaper to deploy even as capabilities improve.</p>
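<p>Those two efficiency curves can be put side by side with one formula: the compute needed for a fixed capability falls as 0.5 raised to elapsed time over the halving period. A small sketch, using 8 months for the Epoch AI figure and 21 months as the midpoint of the 18-to-24-month Moore’s Law range quoted above.</p>
<pre><code># Compute needed for a fixed capability: cost(t) = 0.5 ** (t / halving_period)
def cost_fraction(months, halving_months):
    return 0.5 ** (months / halving_months)

for months in (8, 24, 48):
    algo = cost_fraction(months, 8)     # algorithmic-efficiency curve
    moore = cost_fraction(months, 21)   # Moore's Law-style curve
    print(f"after {months:2d} months: {algo:.3f} vs {moore:.3f} of original cost")
# After 48 months the 8-month curve alone implies a 64x cost reduction.
</code></pre>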
<p>Looking ahead, he describes a continued surge rather than a slowdown. Leading labs are growing capacity at nearly 4x annually, and since 2020, the compute used to train frontier models has grown 5x every year. Global Artificial Intelligence-relevant compute is forecast to hit 100 million H100-equivalents by 2027, a tenfold increase in three years. He says that could amount to another 1,000x in effective compute by the end of 2028, and that by 2030 an additional 200 gigawatts of compute could come online every year. He links that scale to a transition from chatbots to nearly human-level agents capable of carrying out extended, semiautonomous work across industries, while acknowledging energy as the clearest constraint. A single refrigerator-size Artificial Intelligence rack consumes 120 kilowatts, equivalent to 100 homes, but he argues that falling solar and battery costs create a path for cleaner scaling.</p>
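<p>The headline projections reduce to simple exponent arithmetic; the growth rates and the 120-kilowatt rack figure are taken from the paragraph above, and the per-home power that the comparison implies is noted in the comments.</p>
<pre><code># Rough arithmetic behind the quoted projections.
print(5 ** 3)          # 5x/yr compounded over three years: 125x
print(10 ** (1 / 3))   # "tenfold in three years" implies ~2.15x per year
print(120 / 100)       # 120 kW rack vs 100 homes implies ~1.2 kW per home
</code></pre>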
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
