At around 5 a.m. local time, an undersea telecommunications cable between Estonia and Finland was damaged for the fourth time in roughly 1.5 years. Finnish special forces have taken control of the cargo ship Fitburg, detained its 14-member crew, and revealed that the crew members are citizens of Russia, Georgia, Kazakhstan, and Azerbaijan. Elisa, a leading telecom provider in Estonia and Finland and the cable's owner, alerted authorities under standard incident protocols at around 5 a.m.

Officials emphasize that the communications infrastructure between Estonia and neighboring countries is highly redundant: Estonia is connected abroad via 12 international cables, so the loss of individual links does not translate into systemic outages. “We could talk about a critical situation only if just one cable were still operational, but at the moment we have a significant margin,” said Liisa Pakosta, Estonia's justice and digital affairs minister. “It is also worth noting that such breakdowns are usually not even reported, because they occur fairly often. One of the cables runs between Läänemaa and Hiiumaa — it is not part of these 12 and is a local cable.”

Automatic Identification System (AIS) records from MarineTraffic indicate that near the route of Elisa's submarine cable, the Fitburg slowed from 8.9 to 7.3 knots, with a multi-minute data gap suggesting its speed may have dropped even further. Safety management and commercial operations of the vessel are handled by Istanbul-based Sarfo Denizcilik ve Ticaret A.Ş and Albros Shipping & Trading Co. For now, the actual beneficiaries of Fitburg Shipping Co Ltd and Albros are unknown.

In just 1.5 years, cables and pipelines in the Gulf of Finland have been damaged three other times, including: on October 8, 2023, the Balticconnector gas pipeline, along with multiple telecom cables between Estonia and Finland, was damaged; on November 17, 2024, three undersea communications cables between Sweden and Lithuania were broken. More broadly, there have been about half a dozen incidents involving damage to underwater cables and pipelines in the Baltic Sea in recent years.
Tade Oyerinde and Teddy Solomon know a few things about building communities that last. The two spoke at TechCrunch Disrupt this year, breaking down the strategies that helped them scale their companies while retaining consumer interest.

Campus offers associate degrees in areas like information technology and business administration. It also offers certificates in specialties like cosmetology and phlebotomy. More than 3,000 students are enrolled at Campus, and it employs more than 100 professors on at least a part-time basis, Oyerinde says. Oyerinde said Campus decided to launch à la carte courses since employers, in particular, have been asking for classes that can teach their employees individual skills like vibe coding. “Everyone in this room, not just two-year degree-seeking people, will be able to go to Campus and learn with us,” he told the audience. He also has a team of billionaires on his company's cap table — like OpenAI's Sam Altman and Discord's Jason Citron — meaning he doesn't feel much pressure to focus on profits above all else, he said.

Fizz, meanwhile, operates on more than 200 college campuses and at one point operated in high schools across the country. It has raised more than $40 million from investors including Owl Ventures and NEA. Since launching in 2021, Solomon said, the company has added features like a peer-to-peer marketplace that's listed more than 100,000 items, and a video element so people can post more than just text. “We've already worked with companies like Perplexity,” he said. “There are subscription models that have worked well with apps, but right now we're focused on our ads business, and we're focused on building a great product that keeps our users around and makes them happy.”
This seems really similar to the motivations around masked language modeling. By providing increasingly-masked targets over time, a smooth difficulty curve can be established. Randomly masking X% of the tokens/bytes is trivial to implement. MLM can take a small corpus and turn it into an astronomically large one. It's the same trick as the paper on solving the Rubik's cube: start with the solved state and teach the network successively harder states.

The happy Tetris bug is also a neat example of how “bad” inputs can act like curriculum or data augmentation. Corrupted observations forced the policy to be robust to chaos early, which then paid off when the game actually got hard. That feels very similar to tricks in other domains where we deliberately randomize or mask parts of the input. It makes me wonder how many surprisingly strong RL systems in the wild are really powered by accidental curricula that nobody has fully noticed or formalized yet.
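To make the masking point concrete, here is a minimal Python sketch of that kind of curriculum: a toy corpus of integer token ids, a mask ratio that ramps up linearly over training, and a function that returns a corrupted copy plus the reconstruction targets. The mask-token id, function names, and linear schedule are illustrative assumptions, not details from any particular paper.

```python
import random

MASK_ID = 0  # assumed sentinel id reserved for the mask token (illustrative)

def mask_tokens(tokens, mask_ratio, rng=random):
    """Randomly replace roughly mask_ratio of the tokens with MASK_ID.

    Returns (corrupted, targets): targets holds the original token at
    masked positions and None elsewhere, i.e. the reconstruction labels."""
    corrupted, targets = [], []
    for tok in tokens:
        if rng.random() < mask_ratio:
            corrupted.append(MASK_ID)
            targets.append(tok)
        else:
            corrupted.append(tok)
            targets.append(None)
    return corrupted, targets

def curriculum_ratio(step, total_steps, start=0.05, end=0.5):
    """Linearly ramp the mask ratio over training: easy early, harder later."""
    frac = min(step / max(total_steps, 1), 1.0)
    return start + (end - start) * frac

# The same tiny "corpus" yields a different corrupted view at every step.
corpus = [7, 3, 9, 4, 2, 8, 5, 6]
for step in (0, 500, 1000):
    ratio = curriculum_ratio(step, total_steps=1000)
    x, y = mask_tokens(corpus, ratio, rng=random.Random(step))
    print(f"step={step} ratio={ratio:.2f} corrupted={x}")
```

Because the masked positions are re-drawn every step, a single small corpus effectively produces a new training example on every pass, which is where the “astronomically large” corpus comes from.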
A team from Cardiff, Wales, is experimenting with the feasibility of building semiconductors in space, and its most recent success is another step toward that goal. “This is so important because it's one of the core ingredients that we need for our in-space manufacturing process,” Payload Operations Lead Veronica Vera told the BBC. “So being able to demonstrate this is amazing.”

Semiconductor manufacturing is a costly and labor-intensive endeavor on Earth, and while putting it in orbit might seem far more complicated, making chips in space offers some theoretical advantages. “The work that we're doing now is allowing us to create semiconductors up to 4,000 times purer in space than we can currently make here today,” Space Forge CEO Josh Western told the publication.

Space Forge launched its first satellite in June 2025, hitching a ride on the SpaceX Transporter-14 rideshare mission. However, it still took the company several months to turn on its furnace, showing how complicated this project can get. Nevertheless, this advancement is quite promising, with Space Forge planning to build a bigger space factory with the capacity to output 10,000 chips. Other companies are also experimenting with orbital fabs, with U.S. startup Besxar planning to send “Fabships” into space on Falcon 9 rockets. Putting semiconductor manufacturing in space could help reduce the massive amounts of power and water these processes consume on Earth, while also producing more wafers with fewer impurities.
This article feels more like paid publicity than it does journalism.

If there are enough opportunities to offload stock on the secondary market (which seems to be the case for them), then it's not fiction. If not, I have to assume this isn't true, and that they are functionally illiquid assets that OpenAI gets to decide whether anyone can sell. Unlike traditional RSUs, which you can sell once they vest, with few exceptions.

When I see this technology improve and free the lives of those whose salary is akin to slavery, then I might reconsider. Context: I've been reading about the Mondragon Corporation, and it seems a much better model than this maximum-extraction economy we are building.

The way Altman and others want AI to develop, this is what they're working toward too. Maybe a wild round of mergers & acquisitions, combined with regulatory capture and some monopoly, will be what settles everything. Probably with a crash in the middle of it all.

That's a good milestone. Not really a fan of Altman, but I don't mind the competition he brings to the landscape. But one thing has been consistent for the past 3 years: after every release from all the serious competitors, the hype can go either way. As far as the hype cycles go, OpenAI is oscillating between "Best model ever" and "What a letdown, it's over" at least twice a year. The competition is fierce, a never-ending marathon of all the players getting ahead just a bit. Anthropic is focusing on developers as a clear target, and Gemini has the backing of Google. I don't see OpenAI winning the AI race with marginally better models and arguably a nicer UI/UX (ymmv, but I do like the ChatGPT app experience). That said, my usage decreases month over month.

It usually ends in blood and tears, for both employees and investors. BUT: the SOTA has been greatly advanced, which matters a great deal more than the destiny of a particular corporation or the social status of sam-i-am. So, overall: good news.

https://www.levels.fyi/blog/openai-compensation.html
Might change how you evaluate the value here.

When you work at BigCorp for an extended period of time, your compensation often ends up being majority RSUs as the vesting schedules start to stack up. Until then it's just a theoretical number on paper, which tends to end up being worth a lot less than originally advertised/hoped. I've lost track of the number of times that someone's startup got acquired for (insert what sounds like a big number) and everyone is like “wow the employees must all be rich,” only to find out later that after preferred cap tables and other terms the employees got very little. A lot could happen here, but history says “watch this space” on this stock-based comp.

Some options on the secondary markets, but that only works as long as OpenAI can convince more people to dump money on the burning pile of cash they have going at the moment.
This month, Nvidia rolled out what might be one of the most important updates for its CUDA GPU software stack in years. By shifting to structured data blocks, or tiles, Nvidia is changing how developers design GPU workloads, setting the stage for next-generation architectures that will incorporate more specialized compute accelerators and therefore depend less on thread-level parallelism.

In the original CUDA model, programming is based on SIMT (single-instruction, multiple-thread) execution, and performance depends heavily on low-level decisions such as warp usage, shared-memory tiling, register usage, and the explicit use of tensor-core instructions or libraries. The new tile-based model inverts this: the developer describes computations in terms of operations on tiles — structured blocks of data such as submatrices — without specifying threads, warps, or execution order. The compiler and runtime then automatically map those tile operations onto threads, tensor cores, tensor memory accelerators (TMAs), and the GPU memory hierarchy. The programmer focuses on what computation should happen to the data, while CUDA determines how it runs efficiently on the hardware, which ensures performance scalability across GPU generations, starting with Blackwell and extending to future architectures.

But why introduce such significant changes at the CUDA level? Firstly, AI, simulation, and technical computing no longer revolve around scalar operations: they rely on dense tensor math. Secondly, Nvidia's recent hardware has followed the same trajectory, integrating tensor cores and TMAs as core architectural enhancements. Thirdly, both tensor cores and TMAs differ significantly between architectures. From Turing (the first GPU architecture to incorporate tensor units as assisting units) to Blackwell (where tensor engines became the primary compute engines), Nvidia has repeatedly reworked how tensor engines are scheduled, how data is staged and moved, and how much of the execution pipeline is managed by warps and threads versus dedicated hardware. As a result, while tensor hardware has scaled aggressively, the lack of uniformity across generations has made low-level tuning at the warp and thread level impractical, so Nvidia had to elevate CUDA toward higher-level abstractions that describe intent at the tile level rather than at the thread level, leaving the optimizations to compilers and runtimes. One bonus of this approach is that Nvidia can extract performance gains across virtually all workloads throughout the active life cycle of its GPU architectures. Note that CUDA does not abandon SIMT paths with NVVM/LLVM and PTX altogether; when developers need them, they can still write such kernels.

In the traditional CUDA stack, PTX serves as a portable abstraction for thread-oriented programs, ensuring that SIMT kernels persist across GPU generations. CUDA Tile IR is designed to provide the same long-term stability for tile-based computations: it defines tile blocks, their relationships, and the operations that transform them, but hides execution details that can change from one GPU family to another. This virtual ISA also becomes the target for compilers, frameworks, and domain-specific languages that want to exploit tile-level semantics. The runtime takes Tile IR as input and assigns work to hardware pipelines, tensor engines, and memory systems in a way that maximizes performance without exposing device-level variability to the programmer.
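To see what “describing intent at the tile level” means in practice, here is a minimal sketch in plain NumPy on the CPU, not Nvidia's cuTile API: the tile size, function names, and the nested-loop “runtime” are illustrative assumptions. The programmer only states what should happen to one tile; a separate layer, which on a GPU would be the compiler and runtime, decides how the tiles cover the full problem.

```python
import numpy as np

TILE = 32  # illustrative tile (submatrix) size

def matmul_tile(a_tile, b_tile, c_tile):
    """What a tile-level program expresses: multiply two input tiles and
    accumulate into an output tile. No threads, warps, or execution order."""
    c_tile += a_tile @ b_tile

def run_tiled_matmul(A, B):
    """Stand-in for the compiler/runtime: it decides how the tile operations
    cover the full problem (here, plain nested loops on the CPU)."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2 and M % TILE == 0 and N % TILE == 0 and K % TILE == 0
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, TILE):
        for j in range(0, N, TILE):
            for k in range(0, K, TILE):
                matmul_tile(A[i:i+TILE, k:k+TILE],
                            B[k:k+TILE, j:j+TILE],
                            C[i:i+TILE, j:j+TILE])
    return C

A = np.random.rand(64, 96).astype(np.float32)
B = np.random.rand(96, 64).astype(np.float32)
assert np.allclose(run_tiled_matmul(A, B), A @ B, atol=1e-3)
```

On real hardware, that outer mapping layer is what CUDA Tile IR and the runtime take over: tile operations get assigned to tensor cores, TMA transfers, and the memory hierarchy rather than to Python loops, without the programmer spelling out threads or warps.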
For now, development efforts are focused primarily on AI-centric algorithms, but Nvidia plans to expand functionality, features, and performance over time, as well as to introduce a C++ implementation in upcoming releases. As Nvidia's CUDA Tile evolves, it can be applied to a wide range of applications, including scientific simulations (on architectures that support the required precision), signal and image/video processing, and many HPC workloads that decompose problems into block-based computations.

In its initial release, CUDA Tile support is limited to Blackwell-class GPUs with compute capabilities 10.x and 12.x, but future releases will bring support for 'more architectures,' though it is unclear whether that means previous-generation Hopper or next-generation Rubin. CUDA Tile will coexist with the traditional, proven SIMT model, as not all workloads use tensor math extensively, though the industry's direction of travel is more or less clear, and Nvidia's focus will follow it.

Nvidia's CUDA Tile IR provides the abstraction and architectural stability needed for future generations of tensor-focused hardware, while cuTile Python (and similar languages), as well as enhanced tools, offer practical paths for developers to transition from SIMT-heavy workflows. Combined with expanded partitioning features, math-library optimizations, and improved debugging tools, CUDA 13.1 marks a major milestone in Nvidia's long-term strategy: abstracting away hardware complexity and enabling seamless performance scalability across GPU generations.
China's largest domestic memory maker, ChangXin Memory Technologies (CXMT), is preparing for a major IPO in Shanghai, aiming to raise roughly $4.2 billion to expand production and fund next-generation DRAM development. The company nearly doubled its revenue year-over-year in 2025 and expects to swing back into profitability, largely thanks to a rebound in DRAM pricing. That rebound, in turn, is being driven by an unusually strong mix of demand from AI infrastructure, cloud providers, and device manufacturers, all competing for a finite supply of memory chips.

By production volume, CXMT is now the world's fourth-largest DRAM manufacturer, supplying memory for everything from smartphones and PCs to servers used by major Chinese tech firms. The company's IPO pitch is straightforward: expand wafer capacity, modernize fabrication lines, and invest in future DRAM technologies.

In theory, that should be good news for the global memory market. To some extent, that's true; if CXMT can satisfy a larger share of China's domestic demand, that potentially reduces pressure on the rest of the market. Every server or laptop built with locally sourced memory is one less unit competing for supply from market leaders Samsung, SK hynix, and Micron. However, the money CXMT is raising now won't translate into meaningful global supply increases overnight.

At the same time, demand isn't standing still. AI workloads continue to soak up enormous amounts of memory, and not just high-end HBM but also conventional DRAM for servers, storage systems, and supporting infrastructure. Large customers are increasingly locking in long-term supply contracts, which reduces the amount of memory that ever reaches the open market. That means CXMT's expansion may help stabilize things in the medium term, but it's unlikely to provide immediate relief for PC builders or consumers wondering why DDR5 prices are still elevated.

It's also worth noting what CXMT is not doing: the company isn't racing to flood the market with ultra-cheap consumer RAM. Like every other major memory manufacturer, it's prioritizing higher-margin products and long-term customers. In fact, it's possible that CXMT's expansion could make things worse in the near term; after all, as companies like CXMT ramp up, they compete for the same fabrication equipment, materials, and engineering talent as Samsung, Micron, and SK hynix.

Another wrinkle is the question of how quickly CXMT can realistically advance its manufacturing technology. South Korean prosecutors have indicted multiple former Samsung employees over allegations that proprietary DRAM process technology was leaked to CXMT — claims Samsung has said are tied to CXMT's recent progress at advanced nodes like 10nm. The situation highlights how difficult and resource-intensive cutting-edge memory development really is. Whether through legitimate R&D or contested technology transfer, moving the needle on modern DRAM production is slow, expensive, and heavily constrained, which means even aggressive expansion plans don't guarantee rapid gains in usable supply.

If CXMT succeeds in scaling production efficiently, it could eventually help absorb some of the demand growth that's currently pushing prices upward. It also adds another serious player to a market that's been dominated by three companies for years, which is generally healthy for competition.
It's better understood as part of a slow rebalancing of the memory industry as it adapts to a world where AI, cloud computing, and high-density systems are the norm. Whether that translates into cheaper RAM on store shelves depends on how quickly global production can catch up to an industry that's suddenly very, very hungry for memory.
Roughly 1,000 light-years away from Earth, a gigantic disk of gas and dust is swirling around a young star and giving rise to new planets. While it was first identified in 2016, astronomers have now used NASA's Hubble Space Telescope to capture the first image of this planetary nursery in visible light. The new images reveal extended filaments of material that, strangely, are concentrated on just one side of the disk. When viewed edge-on, the planet-forming disk, nicknamed Dracula's Chivito, resembles a sandwich, with a dark central lane flanked by white top and bottom layers of gas and dust.

“The level of detail we're seeing is rare in protoplanetary disk imaging, and these new Hubble images show that planet nurseries can be much more active and chaotic than we expected,” Kristina Monsch, study lead author and a postdoctoral researcher at the Center for Astrophysics (CfA), a collaboration between Harvard University and the Smithsonian, said in a NASA statement.

All planets form from disks of gas and dust encircling young stars. Astronomers have long believed that these protoplanetary disks were relatively orderly, serene environments where planets gradually coalesce over millions of years. Recent studies have challenged that assumption, pointing to greater complexity and diversity among these systems. “We were stunned to see how asymmetric this disk is,” co-author Joshua Bennett Lovell, also an astronomer at the CfA, said in the statement. “Hubble has given us a front row seat to the chaotic processes that are shaping disks as they build new planets—processes that we don't yet fully understand but can now study in a whole new way.”

In essence, Dracula's Chivito is a scaled-up model of what our solar system looked like 4.6 billion years ago. “In theory, [Dracula's Chivito] could host a vast planetary system,” Monsch said. “While planet formation may differ in such massive environments, the underlying processes are likely similar. Right now, we have more questions than answers, but these new images are a starting point for understanding how planets form over time and in different environments.” Dracula's Chivito is therefore a natural laboratory for studying planet formation, says Monsch. Hubble and other space telescopes, such as NASA's James Webb, will continue observing this unique disk to uncover what's shaping its bizarre structure.
The Washington State Department of Financial Institutions had ordered Coinme to stop transmitting money for customers, alleging the startup improperly claimed as its own income more than $8 million owed to consumers from unredeemed crypto vouchers. Coinme said the order was stayed after it provided detailed financial records and operational information to regulators that clarified key details about its business practices. As a result, the company said, it will be able to “continue serving customers in Washington State while addressing any remaining concerns.” The state agency had been seeking to revoke Coinme's money transmitter license, impose a $300,000 fine, and ban CEO Neil Bergquist from the industry for 10 years. The agreement, laid out in a Dec. 23 consent order, requires Coinme to segregate Washington customer assets into dedicated accounts within 14 days, and move cash or cash equivalents tied to outstanding Washington kiosk transactions into a segregated account within 30 days. “Our commitment to customer protection and regulatory compliance remains our top priority,” Bergquist said in a statement, noting that Coinme has had a collaborative relationship with the agency dating back to the company's founding in 2014. Coinme operates what it calls the nation's largest cash-to-crypto network through partnerships with MoneyGram and Coinstar.