
- [$] Bootc for workstation use
The bootc project allows users to create a bootable Linux system image using the container tooling that many developers are already familiar with. It is an evolution of OSTree (now called libostree), which is used to create Fedora Silverblue and other image-based distributions. While creating custom images is still a job for experts, the container technology simplifies delivering heavily customized images to non-technical users.
- Security updates for Friday
Security updates have been issued by AlmaLinux (bind, bind9.16, libsoup, mariadb:10.5, and sssd), Debian (chromium, keystone, and swift), Fedora (apptainer, buildah, chromium, fcitx5, fcitx5-anthy, fcitx5-chewing, fcitx5-chinese-addons, fcitx5-configtool, fcitx5-hangul, fcitx5-kkc, fcitx5-libthai, fcitx5-m17n, fcitx5-qt, fcitx5-rime, fcitx5-sayura, fcitx5-skk, fcitx5-table-extra, fcitx5-unikey, fcitx5-zhuyin, GeographicLib, libime, mbedtls, mingw-poppler, mupen64plus, python-starlette, webkitgtk, and xen), Mageia (dcmtk, java-1.8.0-openjdk, java-11-openjdk, java-17-openjdk, java-latest-openjdk, libvpx, and sqlite3), Oracle (bind, bind9.16, kernel, libsoup, libsoup3, osbuild-composer, qt6-qtsvg, sssd, and valkey), Red Hat (kernel and kernel-rt), SUSE (bind, gpg2, ImageMagick, python-Django, and runc), and Ubuntu (linux-azure, linux-azure-4.15, linux-fips, linux-aws-fips, linux-gcp-fips, linux-gcp, linux-gcp-6.8, linux-gke, linux-intel-iot-realtime, linux-realtime, linux-raspi-5.4, and linux-realtime, linux-realtime-6.8).
- Mastodon 4.5 released
Version 4.5 of the Mastodon decentralized social-media platform has been released. Notable features in this release include quote posts, native emoji support, as well as enhanced moderation and blocking features for server administrators. The project also has a post detailing new features in 4.5 for developers of clients and other software that interacts with Mastodon.
- Freedesktop.org now hosts the Filesystem Hierarchy Standard
The future of the Filesystem Hierarchy Standard (FHS) has been under discussion for some time; now, Neal Gompa has announced that the FHS is "hosted and stewarded" by Freedesktop.org. For those who are unaware, the Filesystem Hierarchy Standard (FHS) defines how POSIX operating systems organize system and user data. It is broadly adopted by Linux, BSD, and other operating systems that follow POSIX-like conventions. See this page for the specification's new home.
- [$] Toward fast, containerized, user-space filesystems
Filesystems are complex and performance-sensitive beasts. They can also present security concerns. Microkernel-based systems have long pushed filesystems into separate processes in order to contain any vulnerabilities that may be found there. Linux can do the same with the Filesystem in Userspace (FUSE) subsystem, but using FUSE brings a significant performance penalty. Darrick Wong is working on ways to eliminate that penalty, and he has a massive patchset showing how ext4 filesystems can be safely implemented in user space by unprivileged processes with good performance. This work has the potential to radically change how filesystems are managed on Linux systems.
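To make the user-space filesystem model concrete, here is a minimal sketch (not Wong's patchset, just the existing FUSE path) of a read-only filesystem served from an unprivileged process. It assumes the third-party fusepy bindings are installed and that the mountpoint passed on the command line already exists; every read goes through the kernel's FUSE layer and back into this Python process, which is exactly the round trip whose cost the new work aims to reduce.

```python
#!/usr/bin/env python3
# Minimal sketch of a read-only filesystem served from an unprivileged
# user-space process via fusepy (assumed installed: `pip install fusepy`).
# The mountpoint is taken from the command line and is hypothetical.
import errno
import stat
import sys
import time

from fuse import FUSE, FuseOSError, Operations

CONTENT = b"hello from user space\n"

class HelloFS(Operations):
    """Exposes a single file, /hello, backed by an in-memory buffer."""

    def getattr(self, path, fh=None):
        now = time.time()
        if path == "/":
            return dict(st_mode=(stat.S_IFDIR | 0o755), st_nlink=2,
                        st_ctime=now, st_mtime=now, st_atime=now)
        if path == "/hello":
            return dict(st_mode=(stat.S_IFREG | 0o444), st_nlink=1,
                        st_size=len(CONTENT),
                        st_ctime=now, st_mtime=now, st_atime=now)
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        return [".", "..", "hello"]

    def read(self, path, size, offset, fh):
        # Each read is dispatched by the kernel FUSE layer into this process.
        return CONTENT[offset:offset + size]

if __name__ == "__main__":
    FUSE(HelloFS(), sys.argv[1], foreground=True, ro=True)
```

Run it as, say, `python hellofs.py /tmp/hellofs` and read `/tmp/hellofs/hello` from another shell; no root privileges are involved beyond what FUSE itself normally requires.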
- Security updates for Thursday
Security updates have been issued by Debian (unbound), Fedora (deepin-qt5integration, deepin-qt5platform-plugins, dtkcore, dtkgui, dtklog, dtkwidget, fcitx-qt5, fcitx5-qt, fontforge, gammaray, golang-github-openprinting-ipp-usb, kddockwidgets, keepassxc, kf5-akonadi-server, kf5-frameworkintegration, kf5-kwayland, plasma-integration, python-qt5, qadwaitadecorations, qt5, qt5-qt3d, qt5-qtbase, qt5-qtcharts, qt5-qtconnectivity, qt5-qtdatavis3d, qt5-qtdeclarative, qt5-qtdoc, qt5-qtgamepad, qt5-qtgraphicaleffects, qt5-qtimageformats, qt5-qtlocation, qt5-qtmultimedia, qt5-qtnetworkauth, qt5-qtquickcontrols, qt5-qtquickcontrols2, qt5-qtremoteobjects, qt5-qtscript, qt5-qtscxml, qt5-qtsensors, qt5-qtserialbus, qt5-qtserialport, qt5-qtspeech, qt5-qtsvg, qt5-qttools, qt5-qttranslations, qt5-qtvirtualkeyboard, qt5-qtwayland, qt5-qtwebchannel, qt5-qtwebengine, qt5-qtwebkit, qt5-qtwebsockets, qt5-qtwebview, qt5-qtx11extras, qt5-qtxmlpatterns, qt5ct, and xorg-x11-server), Mageia (binutils, gstreamer1.0-plugins-bad, libsoup, libsoup3, mediawiki, net-tools, and tigervnc, x11-server, and x11-server-xwayland), Red Hat (tigervnc), SUSE (aws-efs-utils, fetchmail, flake-pilot, ImageMagick, java-1_8_0-ibm, java-1_8_0-openjdk, kernel-devel, kubecolor, OpenSMTPD, sccache, tiff, and zellij), and Ubuntu (linux, linux-aws, linux-aws-6.14, linux-gcp, linux-gcp-6.14, linux-oem-6.14, linux-oracle, linux-oracle-6.14, linux-raspi, linux-realtime, linux, linux-aws, linux-gkeop, linux-hwe-6.8, linux-ibm, linux-ibm-6.8, linux-lowlatency, linux-lowlatency-hwe-6.8, linux-nvidia, linux-nvidia-lowlatency, linux, linux-aws, linux-kvm, linux-lts-xenial, linux-oracle-6.8, linux-realtime-6.14, poppler, python-django, and various linux-* packages).
- [$] LWN.net Weekly Edition for November 6, 2025
Inside this week's LWN.net Weekly Edition: Front: Python thread safety; Namespace reference counting; Merigraf; Speeding up short reads; Julia 1.12; systemd security. Briefs: CHERIoT 1.0; Chromium XSLT; Arm KASLR; Bazzite; Devuan 6.0; Incus 6.18; LXQt 2.3.0; Rust 1.91.0; Quotes; ... Announcements: Newsletters, conferences, security updates, patches, and more.
- Removing XSLT from Chromium
Mason Freed and Dominik Röttsches have published a document with a timeline and plans for removing Extensible Stylesheet Language Transformations (XSLT) from the Chromium project and Chrome browser: Chromium has officially deprecated XSLT, including the XSLTProcessor JavaScript API and the XML stylesheet processing instruction. We intend to remove support from version 155 (November 17, 2026). The Firefox and WebKit projects have also indicated plans to remove XSLT from their browser engines. This document provides some history and context, explains how we are removing XSLT to make Chrome safer, and provides a path for migrating before these features are removed from the browser. LWN covered the Web Hypertext Application Technology Working Group (WHATWG) discussion about XSLT in August.
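For sites that still depend on browser-side XSLT, one commonly suggested migration path is to apply the transform ahead of time (or on the server) and ship plain HTML. The sketch below illustrates that idea with Python's lxml library rather than the browser's XSLTProcessor API; the stylesheet and input document are invented examples, not anything taken from the Chromium document.

```python
# A hedged sketch of pre-applying an XSLT transform outside the browser,
# using lxml. Stylesheet and input are made-up examples for illustration.
from lxml import etree

XSLT = etree.XML(b"""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/feed">
    <ul>
      <xsl:for-each select="item">
        <li><xsl:value-of select="title"/></li>
      </xsl:for-each>
    </ul>
  </xsl:template>
</xsl:stylesheet>
""")

DOC = etree.XML(b"""\
<feed>
  <item><title>First post</title></item>
  <item><title>Second post</title></item>
</feed>
""")

transform = etree.XSLT(XSLT)      # compile the stylesheet once
html_fragment = transform(DOC)    # apply it to the XML document
print(str(html_fragment))         # prints the transformed <ul> markup
```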
- LXQt 2.3.0 released
Version 2.3.0 of the Lightweight Qt Desktop Environment (LXQt) has been released. The highlight of this release is continued improvement in Wayland support across LXQt components. Rather than offering its own compositor, the LXQt project takes a modular approach and works with several Wayland compositors, such as KWin, labwc, and niri.
- [$] A security model for systemd
Linux has many security features and tools that have evolved over the years to address threats as they emerge and security gaps as they are discovered. Linux security is all, as Lennart Poettering observed at the All Systems Go! conference held in Berlin, somewhat random and not a "clean" design. To many observers, that may also appear to be the case for systemd; however, Poettering said that he does have a vision for how all of the security-related pieces of systemd are meant to fit together. He wanted to use his talk to explain "how the individual security-related parts of systemd actually fit together and why they exist in the first place".

- Ryzen AI Software 1.6.1 Advertises Linux Support
Ryzen AI Software, AMD's collection of tools and libraries for AI inferencing on AMD Ryzen AI class PCs, has Linux support with its newest point release. Though this "early access" Linux support is restricted to registered AMD customers...

- Hilarious Unused Audio From 2003 Baseball Game Rediscovered by Video Game History Foundation
After popular arcade games like Mortal Kombat and Spy Hunter, Midway Games jumped into the home console market, and in 2003 launched their baseball game franchise "MLB Slugfest" for Xbox, PS2, and GameCube. But at times it was almost a parody of baseball, including announcers filling the long hours of airtime with bizarre, rambling conversations. ("I read today that kitchen utensils are gonna hurt more people tonight than lifting heavy objects during the day...") Now former Midway Games producer Mark Flitman has revealed the even weirder conversations rejected by Major League Baseball. ("Ah, baseball on a sunny afternoon. Is there anything better? We've been talking about breaking pop bottles with rocks. I guess that is...") The nonprofit Video Game History Foundation published the text in their digital archive — and shared 79 seconds of sound clips that were actually recorded but never used in the final game. ("Enjoying some smoked whale meat up here in the booth today...") Their Bluesky post with the audio drew over 5,500 likes and 2,400 reposts, with one commenter wondering if the bizarre (and unapproved) conversations were "part of the tactic where you include overtly inappropriate content to make the stuff you actually want to keep seem more appropriate." But the Foundation's library director thinks the voice actors were just going wild. "We talked with Mark on our podcast and it sounds like they just did a lot of improv and got carried away." He added later that the game's producer "would give them prompts and they'd run with it. The voice actors (Kevin Matthews and Tim Kitzrow) have backgrounds in sports radio and comedy, so they came up with wild nonsense like this." The gaming site Aftermath notes the Foundation also has an archive page for all the other sound files on the CD. Maybe it's the ultimate tribute to the craziness that was MLB Slugfest. Years ago some fans of the game shared their memories on Reddit...
"The first time my friend tried to bean me and my hitter caught the ball was so hype, we were freaking out. Every game quickly evolved into trying to get our hitters to charge the mound."
"I just remembered you could also kick the shit out of the fielder near your base if he got too close. Man that game was awesome."
"You could do jump kicks into the catcher like Richie from The Benchwarmers."
"Every time someone got on base we would run the ball over to them and beat their asses for 30 seconds. Good times."
Six years after the launch of the franchise, Midway Games declared bankruptcy.
Read more of this story at Slashdot.
- Did ChatGPT Conversations Leak... Into Google Search Console Results?
"For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination," reports Ars Technica: the search-traffic tool for webmasters, Google Search Console. Though it normally shows the short phrases or keywords typed into Google which led someone to their site, "starting this September, odd queries, sometimes more than 300 characters long, could also be found" in Google Search Console. And the chats "appeared to be from unwitting people prompting a chatbot to help solve relationship or business problems, who likely expected those conversations would remain private." Jason Packer, owner of analytics consulting firm Quantable, flagged the issue in a detailed blog post last month, telling Ars Technica he'd seen 200 odd queries — including "some pretty crazy ones." (Web optimization consultant Slobodan Manić helped Packer investigate...) Packer points out that "nobody clicked share," nor were users given an option to prevent their chats from being exposed. Packer suspected that these queries were connected to reporting from The Information in August that cited sources claiming OpenAI was scraping Google search results to power ChatGPT responses. Sources claimed that OpenAI was leaning on Google to answer prompts to ChatGPT seeking information about current events, like news or sports... "Did OpenAI go so fast that they didn't consider the privacy implications of this, or did they just not care?" Packer posited in his blog... Clearly some of those searches relied on Google, Packer's blog said, mistakenly sending to GSC "whatever" the user says in the prompt box... This means "that OpenAI is sharing any prompt that requires a Google Search with both Google and whoever is doing their scraping," Packer alleged. "And then also with whoever's site shows up in the search results! Yikes." To Packer, it appeared that "ALL ChatGPT prompts" that used Google Search risked being leaked during the past two months. OpenAI claimed only a small number of queries were leaked but declined to provide a more precise estimate. So, it remains unclear how many of the 700 million people who use ChatGPT each week had prompts routed to Google Search Console. "Perhaps most troubling to some users — whose identities are not linked in chats unless their prompts perhaps share identifying information — there does not seem to be any way to remove the leaked chats from Google Search Console."
Read more of this story at Slashdot.
- 'Breaking Bad' Creator Hates AI, Promises New Show 'Pluribus' Was 'Made By Humans'
The new series from Breaking Bad creator Vince Gilligan, Pluribus, was emphatically made by humans, not AI, reports TechCrunch: If you watched all the way to the end of the new Apple TV show "Pluribus," you may have noticed an unusual disclaimer in the credits: "This show was made by humans." That terse message — placed right below a note that "animal wranglers were on set to ensure animal safety" — could potentially provide a model for other filmmakers seeking to highlight that their work was made without the use of generative AI. In fact, yesterday the former X-Files writer told Variety "I hate AI. AI is the world's most expensive and energy-intensive plagiarism machine...." He goes on, about how AI-generated content is "like a cow chewing its cud — an endlessly regurgitated loop of nonsense," and how the U.S. will fail to regulate the technology because of an arms race with China. He works himself up until he's laughing again, proclaiming: "Thank you, Silicon Valley! Yet again, you've fucked up the world." He also says "there's a very high possibility that this is all a bunch of horseshit," according to the article. "It's basically a bunch of centibillionaires whose greatest life goal is to become the world's first trillionaires. I think they're selling a bag of vapor." And earlier this week he told Polygon that he hasn't used ChatGPT "because, as of yet, no one has held a shotgun to my head and made me do it." (Adding "I will never use it.") Time magazine called Thursday's two-episode premiere "bonkers." Though ironically, that premiere hit its own dystopian glitch. "After months of buildup and an omnipresent advertising campaign, Apple's much-anticipated new show Pluribus made its debut..." reports Macworld. "And the service promptly suffered a major outage across the U.S. and Canada." As reported by Bloomberg and others, users started to report that the service had crashed at around 10:30 p.m. ET, shortly after Apple made the first two episodes of the show available to stream. There were almost 13,000 reports on Downdetector before Apple acknowledged the problem on its System Status page. Reports say the outage was brief, lasting less than an hour... [T]here remains a Resolved Outage note on Apple TV (simply saying "Some users were affected; users experienced a problem with Apple TV" between 10:29 and 11:38 p.m.), as well as on Apple Music and Apple Arcade, which also went down at the same time. Social media reports indicated that the outage was widespread.
Read more of this story at Slashdot.
- New Firefox Mascot 'Kit' Unveiled On New Web Page
"The Firefox brand is getting a refresh and you get the first look," says a new web page at Firefox.com. "Kit's our new mascot and your new companion through an internet that's private, open and actually yours." Slashdot reader BrianFagioli believes the new mascot "is meant to communicate that message in a warmer, more relatable way." And Firefox is already selling shirts with Kit over the pocket (as well as stickers)...
Read more of this story at Slashdot.
- Common Crawl Criticized for 'Quietly Funneling Paywalled Articles to AI Developers'
For more than a decade, the nonprofit Common Crawl "has been scraping billions of webpages to build a massive archive of the internet," notes the Atlantic, making it freely available for research. "In recent years, however, this archive has been put to a controversial purpose: AI companies including OpenAI, Google, Anthropic, Nvidia, Meta, and Amazon have used it to train large language models. "In the process, my reporting has found, Common Crawl has opened a back door for AI companies to train their models with paywalled articles from major news websites. And the foundation appears to be lying to publishers about this — as well as masking the actual contents of its archives..." Common Crawl's website states that it scrapes the internet for "freely available content" without "going behind any 'paywalls.'" Yet the organization has taken articles from major news websites that people normally have to pay for — allowing AI companies to train their LLMs on high-quality journalism for free. Meanwhile, Common Crawl's executive director, Rich Skrenta, has publicly made the case that AI models should be able to access anything on the internet. "The robots are people too," he told me, and should therefore be allowed to "read the books" for free. Multiple news publishers have requested that Common Crawl remove their articles to prevent exactly this use. Common Crawl says it complies with these requests. But my research shows that it does not. I've discovered that pages downloaded by Common Crawl have appeared in the training data of thousands of AI models. As Stefan Baack, a researcher formerly at Mozilla, has written, "Generative AI in its current form would probably not be possible without Common Crawl." In 2020, OpenAI used Common Crawl's archives to train GPT-3. OpenAI claimed that the program could generate "news articles which human evaluators have difficulty distinguishing from articles written by humans," and in 2022, an iteration on that model, GPT-3.5, became the basis for ChatGPT, kicking off the ongoing generative-AI boom. Many different AI companies are now using publishers' articles to train models that summarize and paraphrase the news, and are deploying those models in ways that steal readers from writers and publishers. Common Crawl maintains that it is doing nothing wrong. I spoke with Skrenta twice while reporting this story. During the second conversation, I asked him about the foundation archiving news articles even after publishers have asked it to stop. Skrenta told me that these publishers are making a mistake by excluding themselves from "Search 2.0" — referring to the generative-AI products now widely being used to find information online — and said that, anyway, it is the publishers that made their work available in the first place. "You shouldn't have put your content on the internet if you didn't want it to be on the internet," he said. Common Crawl doesn't log in to the websites it scrapes, but its scraper is immune to some of the paywall mechanisms used by news publishers. For example, on many news websites, you can briefly see the full text of any article before your web browser executes the paywall code that checks whether you're a subscriber and hides the content if you're not. Common Crawl's scraper never executes that code, so it gets the full articles.
Thus, by my estimate, the foundation's archives contain millions of articles from news organizations around the world, including The Economist, the Los Angeles Times, The Wall Street Journal, The New York Times, The New Yorker, Harper's, and The Atlantic.... A search for nytimes.com in any crawl from 2013 through 2022 shows a "no captures" result, when in fact there are articles from NYTimes.com in most of these crawls. "In the past year, Common Crawl's CCBot has become the scraper most widely blocked by the top 1,000 websites," the article points out...
Read more of this story at Slashdot.
- Scientists Edit Gene in 15 Patients That May Permanently Reduce High Cholesterol
A CRISPR-based drug given to study participants by infusion is raising hopes for a much easier way to lower cholesterol, reports CNN: With a snip of a gene, doctors may one day permanently lower dangerously high cholesterol, possibly removing the need for medication, according to a new pilot study published Saturday in the New England Journal of Medicine. The study was extremely small — only 15 patients with severe disease — and was meant to test the safety of a new medication delivered by CRISPR-Cas9, a biological sort of scissor which cuts a targeted gene to modify or turn it on or off. Preliminary results, however, showed nearly a 50% reduction in low-density lipoprotein, or LDL, the "bad" cholesterol which plays a major role in heart disease — the No. 1 killer of adults in the United States and worldwide. The study, which will be presented Saturday at the American Heart Association Scientific Sessions in New Orleans, also found an average 55% reduction in triglycerides, a different type of fat in the blood that is also linked to an increased risk of cardiovascular disease. "We hope this is a permanent solution, where younger people with severe disease can undergo a 'one and done' gene therapy and have reduced LDL and triglycerides for the rest of their lives," said senior study author Dr. Steven Nissen, chief academic officer of the Sydell and Arnold Miller Family Heart, Vascular & Thoracic Institute at Cleveland Clinic in Ohio.... Today, cardiologists want people with existing heart disease or those born with a predisposition for hard-to-control cholesterol to lower their LDL well below 100, which is the average in the US, said Dr. Pradeep Natarajan, director of preventive cardiology at Massachusetts General Hospital and associate professor of medicine at Harvard Medical School in Boston... People with a nonfunctioning ANGPTL3 gene — which Natarajan says applies to about 1 in 250 people in the US — have lifelong levels of low LDL cholesterol and triglycerides without any apparent negative consequences. They also have exceedingly low or no risk for cardiovascular disease. "It's a naturally occurring mutation that's protective against cardiovascular disease," said Nissen, who holds the Lewis and Patricia Dickey Chair in Cardiovascular Medicine at Cleveland Clinic. "And now that CRISPR is here, we have the ability to change other people's genes so they too can have this protection." "Phase 2 clinical trials will begin soon, quickly followed by Phase 3 trials, which are designed to show the effect of the drug on a larger population, Nissen said." And CNN quotes Nissen as saying "We hope to do all this by the end of next year. We're moving very fast because this is a huge unmet medical need — millions of people have these disorders and many of them are not on treatment or have stopped treatment for whatever reason."
Read more of this story at Slashdot.
- Bank of America Faces Lawsuit Over Alleged Unpaid Time for Windows Bootup, Logins, and Security Token Requests
A former Business Analyst reportedly filed a class action lawsuit claiming that for years, hundreds of remote employees at Bank of America first had to boot up complex computer systems before their paid work began, reports Human Resources Director magazine: Tava Martin, who worked both remotely and at the company's Jacksonville facility, says the financial institution required her and fellow hourly workers to log into multiple security systems, download spreadsheets, and connect to virtual private networks — all before the clock started ticking on their workday. The process wasn't quick. According to the filing in the United States District Court for the Western District of North Carolina, employees needed 15 to 30 minutes each morning just to get their systems running. When technical problems occurred, it took even longer... Workers turned on their computers, waited for Windows to load, grabbed their cell phones to request a security token for the company's VPN, waited for that token to arrive, logged into the network, opened required web applications with separate passwords, and downloaded the Excel files they needed for the day. Only then could they start taking calls from business customers about regulatory reporting requirements... The unpaid work didn't stop at startup. During unpaid lunch breaks, many systems would automatically disconnect or otherwise lose connection, forcing employees to repeat portions of the login process — approximately three to five minutes of uncompensated time on most days, sometimes longer when a complete reboot was required. After shifts ended, workers had to log out of all programs and shut down their computers securely, adding another two to three minutes. Thanks to Slashdot reader Joe_Dragon for sharing the article.
Read more of this story at Slashdot.
- Chan Zuckerberg Initiative Shifts Bulk of Philanthropy, 'Going All In on AI-Powered Biology'
The Associated Press reports that "For the past decade, Dr. Priscilla Chan and her husband Mark Zuckerberg have focused part of their philanthropy on a lofty goal — 'to cure, prevent or manage all disease' — if not in their lifetime, then in their children's." During that decade they also funded other initiatives (including underprivileged schools and immigration reform), according to the article. But there's a change coming: Now, the billionaire couple is shifting the bulk of their philanthropic resources to Biohub, the pair's science organization, and focusing on using artificial intelligence to accelerate scientific discovery. The idea is to develop virtual, AI-based cell models to understand how they work in the human body, study inflammation and use AI to "harness the immune system" for disease detection, prevention and treatment. "I feel like the science work that we've done, the Biohub model in particular, has been the most impactful thing that we have done. So we want to really double down on that. Biohub is going to be the main focus of our philanthropy going forward," Zuckerberg said Wednesday evening at an event at the Biohub Imaging Institute in Redwood City, California.... Chan and Zuckerberg have pledged 99% of their lifetime wealth — from shares of Meta Platforms, where Zuckerberg is CEO — toward these efforts... On Thursday, Chan and Zuckerberg also announced that Biohub has hired the team at EvolutionaryScale, an AI research lab that has created large-scale AI systems for the life sciences... Biohub's ambition for the next years and decades is to create virtual cell systems that would not have been possible without recent advances in AI. Similar to how large language models learn from vast databases of digital books, online writings and other media, its researchers and scientists are working toward building virtual systems that serve as digital representations of human physiology on all levels, such as molecular, cellular or genome. As it is open source — free and publicly available — scientists can then conduct virtual experiments on a scale not possible in physical laboratories. "We will continue the model we've pioneered of bringing together scientists and engineers in our own state-of-the-art labs to build tools that advance the field," according to Thursday's blog post. "We'll then use those tools to generate new data sets for training new biological AI models to create virtual cells and immune systems and engineer our cells to detect and treat disease.... "We have also established the first large-scale GPU cluster for biological research, as well as the largest datasets around human cell types. This collection of resources does not exist anywhere else."
Read more of this story at Slashdot.
- World's Largest Cargo Sailboat Completes Historic First Atlantic Crossing
Long-time Slashdot reader AmiMoJo shared this report from Marine Insight: The world's largest cargo sailboat, Neoliner Origin, completed its first transatlantic voyage on 30 October despite damage to one of its sails during the journey. The 136-metre-long vessel had to rely partly on its auxiliary motor and its remaining sail after the aft sail was damaged in a storm shortly after departure... Neoline, the company behind the project, said the damage reduced the vessel's ability to perform fully on wind power... The Neoliner Origin is designed to reduce greenhouse gas emissions by 80 to 90 percent compared to conventional diesel-powered cargo ships. According to the United Nations Conference on Trade and Development (UNCTAD), global shipping produces about 3 percent of worldwide greenhouse gas emissions... The ship can carry up to 5,300 tonnes of cargo, including containers, vehicles, machinery, and specialised goods. It arrived in Baltimore carrying Renault vehicles, French liqueurs, machinery, and other products. The Neoliner Origin is scheduled to make monthly voyages between Europe and North America, maintaining a commercial cruising speed of around 11 knots.
Read more of this story at Slashdot.
- Bombshell Report Exposes How Meta Relied On Scam Ad Profits To Fund AI
"Internal documents have revealed that Meta has projected it earns billions from ignoring scam ads that its platforms then targeted to users most likely to click on them," writes Ars Technica, citing a lengthy report from Reuters. Reuters reports that Meta "for at least three years failed to identify and stop an avalanche of ads that exposed Facebook, Instagram and WhatsApp's billions of users to fraudulent e-commerce and investment schemes, illegal online casinos, and the sale of banned medical products..." On average, one December 2024 document notes, the company shows its platforms' users an estimated 15 billion "higher risk" scam advertisements — those that show clear signs of being fraudulent — every day. Meta earns about $7 billion in annualized revenue from this category of scam ads each year, another late 2024 document states. Much of the fraud came from marketers acting suspiciously enough to be flagged by Meta's internal warning systems. But the company only bans advertisers if its automated systems predict the marketers are at least 95% certain to be committing fraud, the documents show. If the company is less certain — but still believes the advertiser is a likely scammer — Meta charges higher ad rates as a penalty, according to the documents. The idea is to dissuade suspect advertisers from placing ads. The documents further note that users who click on scam ads are likely to see more of them because of Meta's ad-personalization system, which tries to deliver ads based on a user's interests... The documents indicate that Meta's own research suggests its products have become a pillar of the global fraud economy. A May 2025 presentation by its safety staff estimated that the company's platforms were involved in a third of all successful scams in the U.S. Meta also acknowledged in other internal documents that some of its main competitors were doing a better job at weeding out fraud on their platforms... The documents note that Meta plans to try to cut the share of Facebook and Instagram revenue derived from scam ads. In the meantime, Meta has internally acknowledged that regulatory fines for scam ads are certain, and anticipates penalties of up to $1 billion, according to one internal document. But those fines would be much smaller than Meta's revenue from scam ads, a separate document from November 2024 states. Every six months, Meta earns $3.5 billion from just the portion of scam ads that "present higher legal risk," the document says, such as those falsely claiming to represent a consumer brand or public figure or demonstrating other signs of deceit. That figure almost certainly exceeds "the cost of any regulatory settlement involving scam ads...." A planning document for the first half of 2023 notes that everyone who worked on the team handling advertiser concerns about brand-rights issues had been laid off. The company was also devoting resources so heavily to virtual reality and AI that safety staffers were ordered to restrict their use of Meta's computing resources. They were instructed merely to "keep the lights on...." Meta also was ignoring the vast majority of user reports of scams, a document from 2023 indicates. By that year, safety staffers estimated that Facebook and Instagram users each week were filing about 100,000 valid reports of fraudsters messaging them, the document says. But Meta ignored or incorrectly rejected 96% of them. Meta's safety staff resolved to do better.
In the future, the company hoped to dismiss no more than 75% of valid scam reports, according to another 2023 document. A small advertiser would have to get flagged for promoting financial fraud at least eight times before Meta blocked it, a 2024 document states. Some bigger spenders — known as "High Value Accounts" — could accrue more than 500 strikes without Meta shutting them down, other documents say. Thanks to long-time Slashdot reader schwit1 for sharing the article.
Read more of this story at Slashdot.

- Here's one way to cut support ticket volume… send them to another company entirely
Misdirection is the new resolution at major video game house. The CEO of the company behind note-taking app Obsidian says the well-known video game house of the same name has sent one of its customer queries to his own team – claiming that "off-the-shelf AI support software" is why the gaming firm gave a user the wrong email address.…
- Microsoft's lack of quality control is out of control
At one point, Microsoft's QC was legendary. Now, it's the wrong kind of legend. OPINION: I have a habit of ironically referring to Microsoft's various self-induced whoopsies as examples of the company's "legendary approach to quality control." While the robustness of Windows NT in decades past might qualify as "legendary", anybody who has had to use the company's wares in recent years might quibble with the word "quality."…
- Meta can't afford its $600B love letter to Trump
The Zuck better hope his finance bros have deep pockets and a whole lotta patience to pull this off. Meta on Friday floated plans to invest $600 billion in US infrastructure and jobs by 2028 as part of a massive datacenter expansion.…
- ChatGPT, Claude, and Grok make very squishy jury members
All three acquitted a teen in a mock trial based on a case where a judge ruled guilty. Law students at the University of North Carolina at Chapel Hill School of Law last month held a mock trial to see how AI models administer justice.…
- Previously unknown Landfall spyware used in 0-day attacks on Samsung phones
'Precision espionage campaign' began months before the flaw was fixed. A previously unknown Android spyware family called LANDFALL exploited a zero-day in Samsung Galaxy devices for nearly a year, installing surveillance code capable of recording calls, tracking locations, and harvesting photos and logs before Samsung finally patched it in April.…
- Blackwell a no-sell in China as trade deal fails to materialize
Xi and Trump haven't gotten to discuss the chips, though they were supposed to. Nvidia's latest generation of Blackwell accelerators won't be available in China anytime soon, according to CEO Jensen Huang, who said there were no "active discussions" about selling the coveted chips to the Middle Kingdom.…
- Bell bottom-era tape unearthed, could contain lost piece of Unix history
It might have the first-ever version of UNIX written in C. A tape-based piece of unique Unix history may have been lying quietly in storage at the University of Utah for 50+ years. The question is whether researchers will be able to take this piece of middle-aged media and rewind it back to the 1970s to get the data off.…

- Security: Why Linux Is Better Than Windows Or Mac OS
Linux is a free and open-source operating system, first released in 1991 and developed by Linus Torvalds. Since its release it has reached a user base that is widespread worldwide. Linux users swear by the reliability and freedom that this operating system offers, especially when compared to its counterparts, Windows and […]
- Essential Software That Are Not Available On Linux OS
An operating system is essentially the most important component in a computer. It manages the different hardware and software components of a computer in the most effective way. There are different types of operating systems, and each comes with its own set of programs and software. You cannot expect a Linux program to have all […]
- Things You Never Knew About Your Operating System
The advent of computers has brought about a revolution in our daily life. From computers so huge they filled a room, we have come a very long way to desktops and even palmtops. These machines have become our virtual lockers, and a life without these networked machines has become unimaginable. Sending mails, […]
- How To Fully Optimize Your Operating System
Computers and systems are tricky and complicated. If you lack a thorough knowledge or even basic knowledge of computers, you will often find yourself in a bind. You must understand that something as complicated as a computer requires constant care and constant cleaning up of junk files. Unless you put in the time to configure […]
- The Top Problems With Major Operating Systems
There is no system that does not give you any problems. Even if your system and its operating system are easy to understand, there will be times when certain problems arise. Most of these problems are easy to handle and easy to get rid of. But you must be […]
- 8 Benefits Of Linux OS
Linux is a small and fast-growing operating system. However, we can't term it as software yet. As discussed in the article about what a Linux OS can do, Linux is a kernel. Now, kernels are used by software and programs. These kernels are used by the computer and can be used with various third-party software […]
- Things Linux OS Can Do That Other OS Can't
What Is Linux OS? Linux, similar to Unix, is an operating system which can be used for various computers, handheld devices, embedded devices, etc. The reason why a Linux-based system is preferred by many is because it is easy to use and re-use. A Linux-based operating system is technically not an Operating System. Operating […]
- Packagekit Interview
PackageKit aims to simplify the management of applications on Linux and GNU systems. The main objective is to remove the pains it takes to maintain a system. In an interview, Richard Hughes, the developer of PackageKit, said that he aims to make Linux systems just as powerful as Windows or […]
- What’s New in Ubuntu?
What Is Ubuntu? Ubuntu is open-source software. It is useful for Linux-based computers. The software is marketed by Canonical Ltd. and the Ubuntu community. Ubuntu was first released in late October 2004. The Ubuntu program uses the Java, Python, C, C++ and C# programming languages. What Is New? Version 17.04 is now available here […]
- Ext3 Reiserfs Xfs In Windows With Regards To Colinux
The problem with Windows is that there are various limitations to the computer and there is only so much you can do with it. You can access Ext3, ReiserFS, and XFS by using the coLinux tool. Download the tool from the official site or from the SourceForge site. Edit the connection to "TAP-Win32 Adapter […]

- LXQt 2.3.0 released
LXQt, the other Qt desktop environment, released version 2.3.0. This new version comes roughly six months after 2.2.0, and continues the project's adoption of Wayland. The enhancement of Wayland support has been continued, especially in LXQt Panel, whose Desktop Switcher is now enabled for Labwc, Niri, …. It is also equipped with a backend specifically for Wayfire. In addition, the Custom Command plugin is made more flexible, regardless of Wayland and X11. ↫ LXQt 2.3.0 release announcement The screenshot utility has been improved as well, and lxqt-qdbus has been added to lxqt-wayland-session to make qdbus commands easier to use with all kinds of Wayland compositors.
- WINE gaming in FreeBSD Jails with Bastille
FreeBSD offers a whole bunch of technologies and tools to make gaming on the platform a lot more capable than you'd think, and this article by Pertho dives into the details. Running all your games inside a FreeBSD Jail with Wine installed into it is pretty neat. Initially, I thought this was going to be pretty difficult and require a lot of trial and error but I was surprised at how easy it was to get this all working. I was really happy to get some of my favorite games working in a FreeBSD Jail, and having ZFS snapshots around was a great way to test things in case I needed to backtrack. ↫ Pertho at their blog No, this isn't as easy as gaming on Linux has become, and it certainly requires a ton more work and knowledge than just installing a major Linux distribution and Steam, but for those of us who prefer a more traditional UNIX-like experience, this is a great option.
- Tape containing UNIX v4 found
A unique and very important find at the University of Utah: while cleaning out some storage rooms, the staff at the university discovered a tape containing a copy of UNIX v4 from Bell Labs. At this time, no complete copies are known to exist, and as such, this could be a crucial find for the archaeology of early UNIX. The tape in question will be sent to the Computer History Museum for further handling, where bitsavers.org will conduct the recovery process. I have the equipment. It is a 3M tape so it will probably be fine. It will be digitized on my analog recovery set up and I'll use Len Shustek's readtape program to recover the data. The only issue right now is my workflow isn't a "while you wait!" thing, so I need to pull all the pieces into one physical location and test everything before I tell Penny it's OK to come out. ↫ bitsavers.org It's amazing how we still manage to find such treasures in nooks and crannies all over the world, and with everything looking good so far, it seems we'll soon be able to fill in more of UNIX's early history.
- There is no such thing as a 3.5 inch floppy disk
Wait, what? The term "3.5 inch floppy disc" is in fact a misnomer. Whilst the specification for 5.25 inch floppy discs employs Imperial units, the later specification for the smaller floppy discs employs metric units. The standards for these discs all specify the measurements in metric, and only metric. These standards explicitly give the dimensions as 90.0mm by 94.0mm. It's in clause 6 of all three. ↫ Jonathan de Boyne Pollard Even the applicable standard in the US, ANSI X3.171-1989, specifies the size in metric. We could've been referring to these things using proper measurements instead of archaic ones based on the size of a monk's left testicle at dawn at room temperature in 1375 or whatever nonsense imperial or customary used to be based on. I feel dirty for thinking I had to use "inches" for this. If we ever need to talk about these disks on OSNews from here on out, I'll be using proper units of measurement.
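A quick back-of-the-envelope check of the metric dimensions quoted above shows why the customary label is only a rounded nickname:

```python
# Convert the dimensions given in the standards (90.0 mm x 94.0 mm) to inches.
MM_PER_INCH = 25.4

for label, mm in (("width", 90.0), ("depth", 94.0)):
    print(f"{label}: {mm} mm = {mm / MM_PER_INCH:.3f} in")
# width: 90.0 mm = 3.543 in
# depth: 94.0 mm = 3.701 in
```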
- Servo ported to Redox
Redox keeps improving every month, and this past one is certainly a banger. The big news this past month is that Servo, the browser engine written in Rust, has been ported to Redox. It's extremely spartan at the moment, and crashes when a second website is loaded, but it's a promising start. It also just makes sense to have the premier Rust browser engine running on the premier Rust operating system. Htop and bottom have been ported to Redox for much improved system monitoring, and they're joined by a port of GoAccess. The version of Rust has been updated which fixed some issues, and keyboard layout configuration has been greatly improved. Instead of a few hardcoded layouts, they can now be configured dynamically for users of PS/2 keyboards, with USB keyboards receiving this functionality soon as well. There's more, of course, as well as the usual slew of low-level changes and improvements to drivers, the kernel, relibc, and more.
- macOS 26’s new icons are a step backwards
On the new macOS 26 (Tahoe), Apple has mandated that all application icons fit into their prescribed squircle. No longer can icons have distinct shapes, nor even any fun frame-breaking accessories. Should an icon be so foolish as to try to have a bit of personality, it will find itself stuffed into a dingy gray icon jail. ↫ Paul Kafasis The downgraded icons listed in this article are just sad. While there's no accounting for tastes, Apple's new glassy icons are just plain bad, void of any whimsy, and lacking in artistry, especially considering where Apple came from, back when it made beautifully crafted icons that set the bar for the entire industry. It almost seems like a metaphor for tech in general.
- A lost IBM PC/AT model? Analyzing a newfound old BIOS
Some people not only have a very particular set of skills, but also a very particular set of interests that happen to align with those skills perfectly. When several unidentified and mysterious IBM PC ROM chips from the 1980s were discovered on eBay, the dumped contents of two particular chips proved especially troublesome to identify. In 1985, the FCh model byte could only mean the 5170 (PC/AT), and the even/odd byte interleaving does point at a 16-bit bus. But there are three known versions of the PC/AT BIOS released during the 5170 family's lifetime, corresponding to the three AT motherboard types. This one here is clearly not one of them: its date stamps and part numbers don't match, and the actual contents are substantially different besides. My first thought was that this may have come from one of those more shadowy members of the 5170 family: perhaps the AT/370, the 3270 AT/G(X), or the rack-mounted 7532 Industrial AT. But known examples of those carry the same firmware sets as the plain old 5170, so their BIOS extensions (if any) came in the shape of extra adapter ROMs. Whatever this thing was (some other 5170-type machine, a prototype, or even just a custom patch), it seemed I'd have to inquire within for any further clues. ↫ VileR at the int10h.org blog I'll be honest and state that most of the in-depth analysis of the code dumped from the ROM chips is far too complex for me to follow, but that doesn't make the story it tells any less interesting. There's no definitive, 100% conclusive answer at the end, but the available evidence collected by VileR does make a very strong case for a very specific, mysterious variant of the IBM PC being the likely source of the ROMs. If you're interested in some very deep IBM lore, here's your serving.
- The Microsoft SoftCard for the Apple II: getting two processors to share the same memory
We talked about the Z80 SoftCard, Microsofts first hardware product, back in 2023, but thanks to Raymond Chen and Nicole Branagan, weve got some more insights. The Microsoft Z-80 SoftCard was a plug-in expansion card for the Apple II that added the ability to run CP/M software. According to Wikipedia, it was Microsoft’s first hardware product and in 1980 was the single largest revenue source for the company. ↫ Raymond Chen at The Old New Thing And Chen links to an article by Branagan from 2020, which goes into even more detail. So there I was, very happy with my Apple ][plus. But then I saw someone on the internet post, and it seems that my Apple is an overpriced box with a toy microcontroller for a CPU, while real computers use an Intel 8080, 8085 or Zilog Z80 to run something called “CP/M”… but I’ve already spent so much money on the Apple, so can I turn it into a real computer? ↫ Nicole Branagan I have a soft spot for this particular subgenre of hardware add-in cards that allow you to run an entirely different architecture inside your computer and soon, Ill be diving into a particularly capable example here on OSNews.
- bluetui and resterm: two beautiful TUI applications
There's something incredibly enticing and retrofuturistic about a well-designed TUI, or text-based user interface. There's an endless number of these, but two crossed my path these past few days, and I found them particularly appealing. First, we've got bluetui, an application for managing Bluetooth connections on Linux systems with bluez installed. The second is resterm. Resterm is a terminal-first client for working with HTTP, GraphQL, and gRPC services. No cloud sync, no signups, no heavy desktop app. Simple, yet feature rich, terminal client for .http/.rest files. It pairs a Vim-like editor with a workspace explorer, response diff, history, profiler and scripting so you can iterate on requests without leaving the keyboard. ↫ resterm GitHub page I don't use TUIs or the command line in general all that much, but these are two excellent examples of just how beautiful and user-friendly a good text-based user interface can really be. The command line is about a lot more than just archaic, cryptic incantations designed in the 1960s.
- Sculpt OS 25.10 released
In the light of this year's roadmap focus on "rigidity, clarity, performance!", Sculpt OS 25.10 looks the same as version 25.04 but might feel different, as it includes countless under-the-hood improvements from the two preceding framework releases, 25.05 and 25.08. User interaction on performance-starved platforms like the PinePhone has become visibly smoother thanks to our recent CPU scheduling advances. The streamlined block-storage stack combined with various refinements of the package-installation mechanism make the on-target installation of 3rd-party components a bliss. Regarding supported hardware, we steadily follow the tireless work of the Linux kernel community. All PC driver components using Linux kernel code are now consistently based on kernel version 6.12. ↫ Sculpt OS 25.10 release announcement There's also a brand new configuration format, which optionally replaces Sculpt's use of XML for this purpose. Norman Feske, one of the co-founders of Genode Labs, published an article detailing how to test this new format, which also goes much deeper into how it works. For the Sculpt OS 25.10 release, Alexander Böttcher has also released an experimental image with five different kernels to choose from. The image is for PC, and works as a live system so there's no need to install it to explore Sculpt OS. Speaking of Alexander Böttcher, he also published an article about improvements and changes to Sculpt OS's lockscreen component. This component has existed for a very long time, and has been improved considerably over the years, and Böttcher's article details how to install it, configure it, and use it.

- The Most Critical Linux Kernel Breaches of 2025 So Far
by George Whittaker
The Linux kernel, foundational for servers, desktops, embedded systems, and cloud infrastructure, has been under heightened scrutiny. Several vulnerabilities have been exploited in real-world attacks, targeting critical subsystems and isolation layers. In this article, we’ll walk through major examples, explain their significance, and offer actionable guidance for defenders.
CVE-2025-21756 – Use-After-Free in the vsock Subsystem
One of the most alarming flaws this year involves a use-after-free vulnerability in the Linux kernel’s vsock implementation (Virtual Socket), which enables communication between virtual machines and their hosts.
How the exploit works: A malicious actor inside a VM (or other privileged context) manipulates reference counters when a vsock transport is reassigned. The code ends up freeing a socket object while it’s still in use, enabling memory corruption and potentially root-level access.
Why it matters: Since vsock is used for VM-to-host and inter-VM communication, this flaw breaks a key isolation barrier. In multi-tenant cloud environments or container hosts that expose vsock endpoints, the impact can be severe.
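For readers who have never touched vsock, the sketch below shows what the guest side of such a channel looks like from user space, using Python's standard socket module. It is ordinary, documented API usage rather than anything exploit-related; it only runs inside a VM or host with vsock support loaded, and the port number is an arbitrary placeholder.

```python
# Minimal sketch of a guest-side vsock connection, to show the interface
# whose isolation CVE-2025-21756 undermines. Requires a VM/host with vsock
# support; the port number below is a hypothetical service port.
import socket

HOST_CID = socket.VMADDR_CID_HOST   # well-known CID 2: the host/hypervisor
PORT = 5000                         # placeholder port for a host-side service

with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s:
    s.connect((HOST_CID, PORT))     # guest-to-host, no IP networking involved
    s.sendall(b"ping from guest\n")
    print(s.recv(4096))
```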
Mitigation: Kernel maintainers have released patches. If your systems run hosts, hypervisors, or other environments where vsock is present, make sure the kernel is updated and virtualization subsystems are patched.
CVE-2025-38236 – Out-of-Bounds / Sandbox Escape via UNIX Domain Sockets
Another high-impact vulnerability involves the UNIX domain socket interface and the MSG_OOB flag. The bug was publicly detailed in August 2025 and is already in active discussion.
Attack scenario: A process running inside a sandbox (for example a browser renderer) can exploit MSG_OOB operations on a UNIX domain socket to trigger a use-after-free or out-of-bounds read/write. That allows leaking kernel pointers or memory and then chaining to full kernel privilege escalation.
Why it matters: This vulnerability is especially dangerous because it bridges from a low-privilege sandboxed process to kernel-level compromise. Many systems assume sandboxed code is safe; this attack undermines that assumption.
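The interface at the center of this bug is small and easy to see from user space. The sketch below is plain, documented socket API usage on a kernel new enough to support MSG_OOB on AF_UNIX sockets (roughly 5.15 and later); it demonstrates the feature, not the vulnerability.

```python
# Ordinary use of MSG_OOB on a UNIX domain socket pair (Linux-only).
# This shows the documented interface involved in CVE-2025-38236 in benign
# use; it does not trigger or demonstrate the bug itself.
import socket

parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

parent.send(b"AAAA")                   # normal in-band data
parent.send(b"!", socket.MSG_OOB)      # one out-of-band ("urgent") byte

print(child.recv(1, socket.MSG_OOB))   # b'!' - the out-of-band byte
print(child.recv(4))                   # b'AAAA' - the in-band bytes

parent.close()
child.close()
```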
Mitigation: Distributions and vendors (like browser teams) have disabled or restricted MSG_OOB usage for sandboxed contexts. Kernel patches are available. Systems that run browser sandboxes or other sandboxed processes need to apply these updates immediately.
CVE-2025-38352 – TOCTOU Race Condition in POSIX CPU Timers
In September 2025, the U.S. Cybersecurity & Infrastructure Security Agency (CISA) added this vulnerability to its Known Exploited Vulnerabilities (KEV) catalog. Go to Full Article
- Steam Deck 2 Rumors Ignite a New Era for Linux Gaming
by George Whittaker
The speculation around a successor to the Steam Deck has stirred renewed excitement, not just for a new handheld, but for what it signals in Linux-based gaming. With whispers of next-gen specs, deeper integration of SteamOS, and an evolving handheld PC ecosystem, these rumors are fueling broader hopes that Linux gaming is entering a more mature age. In this article we look at the existing rumors, how they tie into the Linux gaming landscape, why this matters, and what to watch.
What the Rumours Suggest
Although Valve has kept things quiet, multiple credible outlets report that the Steam Deck 2 is in development and could arrive well after 2026. Some of the key tidbits:
Editorials note that Valve isn’t planning a mere spec refresh; it wants a “generational leap in compute without sacrificing battery life”. A leaked hardware slide pointed to an AMD “Magnus”-class APU built on Zen 6 architecture being tied to next-gen handhelds, including speculation about the Steam Deck 2. One hardware leaker (KeplerL2) cited a possible 2028 launch window for the Steam Deck 2, which would make it roughly 6 years after the original. Valve’s own design leads have publicly stated that a refresh with only 20-30% more performance is “not meaningful enough”, implying they’re waiting for a more substantial upgrade.
In short: while nothing is official yet, there’s strong evidence that Valve is working on the next iteration and wants it to be a noteworthy jump, not just a minor update.
Why This Matters for Linux Gaming
The rumoured arrival of the Steam Deck 2 isn’t just about hardware; it reflects and could accelerate key inflection points for Linux & gaming:
Validation of SteamOS & Linux Gaming
The original Steam Deck, running SteamOS (a Linux-based OS), helped prove that PC gaming doesn’t always require Windows. A well-received successor would further validate Linux as a first-class gaming platform, not a niche alternative but a mainstream choice.
Handheld PC Ecosystem Momentum
Since the first Deck, many Windows-based handhelds have entered the market (such as the ROG Ally, Lenovo Legion Go). Rumours of the Deck 2 keep the spotlight on the form factor and raise expectations for Linux-native handhelds. This momentum helps encourage driver, compatibility and OS investments from the broader community. Go to Full Article
- Kali Linux 2025.3 Lands: Enhanced Wireless Capabilities, Ten New Tools & Infrastructure Refresh
by George Whittaker
Introduction
The popular penetration-testing distribution Kali Linux has dropped its latest quarterly snapshot: version 2025.3. This release continues the tradition of the rolling-release model used by the project, offering users and security professionals a refreshed toolkit, broader hardware support (especially wireless), and infrastructure enhancements under the hood. With this update, the distribution aims to streamline lab setups, bolster wireless hacking capabilities (particularly on Raspberry Pi devices), and integrate modern workflows including automated VMs and LLM-based tooling.
In this article, we’ll walk through the key highlights of Kali Linux 2025.3, how the changes affect users (both old and new), the upgrade path, and what to keep in mind for real-world deployment.
What’s New in Kali Linux 2025.3
This snapshot from the Kali team brings several categories of improvements: tooling, wireless/hardware support, architecture changes, virtualization/image workflows, UI and plugin tweaks. Below is a breakdown of the major updates.
Tooling Additions: Ten Fresh Packages
One of the headline items is the addition of ten new security tools to the Kali repositories. These tools reflect shifts in the field, toward AI-augmented recon, advanced wireless simulation and pivoting, and updated attack surface coverage. Among the additions are:
- Caido and Caido-cli – a client-server web-security auditing toolkit (graphical client plus backend).
- Detect It Easy (DiE) – a utility for identifying file types, useful in reverse-engineering workflows.
- Gemini CLI – an open-source AI agent that integrates Google's Gemini (or similar LLM) capabilities into the terminal environment.
- krbrelayx – a toolkit focused on Kerberos relaying and unconstrained-delegation attacks.
- ligolo-mp – a multiplayer pivoting solution for lateral movement across networks.
- llm-tools-nmap – allows large-language-model workflows to drive Nmap scans (automated discovery); a rough sketch of this idea appears below.
- mcp-kali-server – configuration tooling to connect an AI agent to Kali infrastructure.
- patchleaks – detects security-fix patches and provides detailed descriptions, useful for defenders and auditors alike.
- vwifi-dkms – enables creation of "dummy" Wi-Fi networks (virtual wireless interfaces) for advanced wireless testing and hacking exercises.
Go to Full Article
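To make the idea behind llm-tools-nmap more concrete, here is a minimal Python sketch of the pattern such a bridge relies on: exposing a constrained Nmap invocation as a callable "tool" that an agent framework can drive. This is not the actual llm-tools-nmap interface; the function name, allow-list, and defaults below are illustrative assumptions, and it only requires nmap to be installed (and authorization to scan the target).

```python
# Rough sketch of an LLM-to-Nmap "tool"; NOT the real llm-tools-nmap API.
# The allow-list keeps an agent from passing arbitrary nmap options.
import subprocess

ALLOWED_FLAGS = {"-sV", "-sT", "-Pn", "-T4", "--top-ports"}  # illustrative allow-list

def run_nmap_scan(target: str, flags: list[str] | None = None) -> str:
    """Run a constrained nmap scan and return its text output."""
    flags = flags or ["-sV", "--top-ports", "100"]
    for flag in flags:
        # Only option-like arguments are checked; plain values (e.g. "100") pass through.
        if flag.startswith("-") and flag not in ALLOWED_FLAGS:
            raise ValueError(f"flag not permitted: {flag}")
    result = subprocess.run(["nmap", *flags, target],
                            capture_output=True, text=True, timeout=600)
    return result.stdout

if __name__ == "__main__":
    # Only scan hosts you are authorized to test.
    print(run_nmap_scan("scanme.nmap.org"))
```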
- VMScape: Cracking VM-Host Isolation in the Speculative Execution Age & How Linux Patches Respond
by George Whittaker Introduction In the world of modern CPUs, speculative execution, where a processor guesses ahead on branches and executes instructions before the actual code path is confirmed, has long been recognized as a performance booster. However, it has also given rise to a class of vulnerabilities collectively known as "Spectre" attacks, in which microarchitectural state (such as the branch target buffer, caches, or predictor history) is abused to leak sensitive data.
Now, a new attack variant, dubbed VMScape, exposes a previously under-appreciated weakness: the isolation between a guest virtual machine and its host (or hypervisor) in the branch predictor domain. In simpler terms: a malicious VM can influence the CPU’s branch predictor in such a way that when control returns to the host, secrets in the host or hypervisor can be exposed. This has major implications for cloud security, virtualization environments, and kernel/hypervisor protections.
In this article we'll walk through how VMScape works, the CPUs and environments it affects, how the Linux kernel and hypervisors are mitigating it, and what users, cloud operators, and admins should know (and do). What VMScape Is & Why It Matters The Basics of Speculative Side-Channels Speculative execution vulnerabilities like Spectre exploit the gap between architectural state (what the software sees as completed instructions) and microarchitectural state (what the CPU has done internally, such as cache loads, branch-predictor updates, etc.). Even when speculative paths are rolled back architecturally, side effects in the microarchitecture can remain and be probed by attackers.
One of the original variants, Spectre-BTI (Branch Target Injection, also called Spectre v2), leveraged the branch target buffer (BTB) and related predictor state to redirect speculative execution along attacker-controlled paths. Over time, hardware and software mitigations (IBRS, eIBRS, IBPB, STIBP) have been introduced. But VMScape shows that when virtualization enters the picture, the isolation assumptions break down. VMScape: Guest to Host via Branch Predictor VMScape (tracked as CVE-2025-40300) is described by researchers from ETH Zürich as "the first Spectre-based end-to-end exploit in which a malicious guest VM can leak arbitrary sensitive information from the host domain/hypervisor, without requiring host code modifications and in default configuration."
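For administrators who want to see what their own systems report, the kernel exposes mitigation status under /sys/devices/system/cpu/vulnerabilities/. The short Python sketch below simply prints whatever is present for Spectre v2 and, on kernels carrying the VMScape fixes, a vmscape entry; the exact name of that entry is an assumption on my part, and older kernels will simply not have it.

```python
# Minimal sketch: report what the running kernel says about Spectre v2 and,
# if present, VMScape. The "vmscape" file name is an assumption; kernels
# without the fix simply won't expose it, and this script just says so.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def report(names: tuple[str, ...] = ("spectre_v2", "vmscape")) -> None:
    for name in names:
        entry = VULN_DIR / name
        if entry.exists():
            print(f"{name}: {entry.read_text().strip()}")
        else:
            print(f"{name}: not reported by this kernel")

if __name__ == "__main__":
    report()
```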
Here are the key elements making VMScape significant:
The attack is cross-virtualization: a guest VM influences the host’s branch predictor state (not just within the guest). Go to Full Article
- Self-Tuning Linux Kernels: How LLM-Driven Agents Are Reinventing Scheduler Policies
by George Whittaker Introduction Modern computing systems rely heavily on operating-system schedulers to allocate CPU time fairly and efficiently. Yet many of these schedulers operate blindly with respect to the meaning of workloads: they cannot distinguish, for example, whether a task is latency-sensitive or batch-oriented. This mismatch between application semantics and scheduler heuristics is often referred to as the semantic gap.
A recent research framework called SchedCP aims to close that gap. By using autonomous LLM-based agents, the system analyzes workload characteristics, selects or synthesizes custom scheduling policies, and safely deploys them into the kernel without human intervention. This represents a meaningful step toward self-optimizing, application-aware kernels.
In this article we will explore what SchedCP is, how it works under the hood, the evidence of its effectiveness, real-world implications, and what caveats remain. Why the Problem Matters At the heart of the issue is that general-purpose schedulers (for example the Linux kernel’s default policy) assume broad fairness, rather than tailoring scheduling to what your application cares about. For instance:
A video-streaming service may care most about minimal tail latency. A CI/CD build system may care most about throughput and job completion time. A cloud analytics job may prefer maximum utilisation of cores with less concern for interactive responsiveness.
Traditional schedulers treat all tasks mostly the same, tuning knobs generically. As a result, systems often sacrifice optimisation opportunities. Some prior efforts have used reinforcement-learning techniques to tune scheduler parameters, but these approaches have limitations: slow convergence, limited generalisation, and weak reasoning about why a workload behaves as it does.
SchedCP starts from the observation that large language models can reason semantically about workloads (expressed in plain language or structured summaries), propose new scheduling strategies, and generate eBPF code that is loaded into the kernel through the sched_ext interface. Thus, a custom scheduler (or modified policy) can be developed for a given workload scenario in a self-service, automated way (a rough conceptual sketch of such a loop appears at the end of this summary). Architecture & Key Components SchedCP comprises two primary subsystems: a control-plane framework and an agent loop that interacts with it. The framework decouples "what to optimise" (reasoning) from "how to act" (execution) in order to preserve kernel stability while enabling powerful optimisations.
Here are the major components: Go to Full Article
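To make the loop described above more tangible, here is a purely conceptual Python sketch, not SchedCP's actual code or interfaces: it samples a tiny workload summary from /proc, stands in for the LLM reasoning step with a trivial classifier, and names a candidate sched_ext-style policy. The thresholds and scheduler names are illustrative assumptions; actually deploying a policy would go through the sched_ext/eBPF machinery the article mentions.

```python
# Conceptual sketch only: not SchedCP's real code or API.
# Observe a workload, "reason" about its character, suggest a policy.
import os
import time

def sample_load(interval: float = 1.0) -> dict:
    """Collect a tiny workload summary from standard /proc interfaces."""
    load1, _, _ = os.getloadavg()
    with open("/proc/stat") as f:
        ctxt_before = int(next(l for l in f if l.startswith("ctxt")).split()[1])
    time.sleep(interval)
    with open("/proc/stat") as f:
        ctxt_after = int(next(l for l in f if l.startswith("ctxt")).split()[1])
    return {"loadavg1": load1,
            "ctx_switches_per_s": (ctxt_after - ctxt_before) / interval}

def choose_policy(summary: dict) -> str:
    """Stand-in for the LLM reasoning step: map the summary to a policy name."""
    if summary["ctx_switches_per_s"] > 50_000:
        return "latency-oriented sched_ext scheduler (e.g. an scx_lavd-style policy)"
    if summary["loadavg1"] > (os.cpu_count() or 1):
        return "throughput-oriented sched_ext scheduler (e.g. an scx_rusty-style policy)"
    return "keep the default kernel scheduler"

if __name__ == "__main__":
    s = sample_load()
    print(s)
    print("suggested policy:", choose_policy(s))
    # Loading a policy for real would mean generating and attaching an eBPF
    # scheduler via sched_ext, which is outside the scope of this sketch.
```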
- Bcachefs Ousted from Mainline Kernel: The Move to DKMS and What It Means
by George Whittaker Introduction After years of debate and development, bcachefs—a modern copy-on-write filesystem once merged into the Linux kernel—is being removed from mainline. As of kernel 6.17, the in-kernel implementation has been excised, and future use is expected via an out-of-tree DKMS module. This marks a turning point for the bcachefs project, raising questions about its stability, adoption, and relationship with the kernel development community.
In this article, we’ll explore the background of bcachefs, the sequence of events leading to its removal, the technical and community dynamics involved, and implications for users, distributions, and the filesystem’s future. What Is Bcachefs? Before diving into the removal, let’s recap what bcachefs is and why it attracted attention.
- Origin & goals: Developed by Kent Overstreet, bcachefs emerged from ideas in the earlier bcache project (a block-device caching layer). It aimed to build a full-featured, general-purpose filesystem combining performance, reliability, and modern features (snapshots, compression, encryption) in a coherent design.
- Mainline inclusion: Bcachefs was merged into the mainline kernel in version 6.7 (released January 2024) after a lengthy review and incubation period.
- "Experimental" classification: Even after becoming part of the kernel, bcachefs always carried disclaimers about its maturity and stability; it was not necessarily recommended for production use by all users.
Its presence in mainline gave distributions a path to ship it more readily, and users had easier access without building external modules, an important convenience for adoption. What Led to the Removal The excision of bcachefs from the kernel was not sudden but the culmination of tension over development practices, patch-acceptance timing, and upstream policy norms. "Externally Maintained" status in 6.17 In the preparation of kernel 6.17, maintainers marked bcachefs as "externally maintained." Though the code remained present, the change signified that upstream would no longer accept new patches or updates within the kernel tree.
This move allowed a transitional period. The code was “frozen” inside the tree to avoid breaking existing systems immediately, while preparation was made for future removal. Go to Full Article
- Linux Mint 22.2 ‘Zara’ Released: Polished, Modern, and Built for Longevity
by George Whittaker Introduction The Linux Mint team has officially unveiled Linux Mint 22.2, codenamed “Zara”, on September 4, 2025. As a Long-Term Support (LTS) release, Zara will receive updates through 2029, promising users stability, incremental improvements, and a comfortable desktop experience.
This version is not about flashy overhauls; rather, it's about refinement: applying polish to existing features, smoothing rough edges, weaving in new conveniences (like fingerprint login), and improving compatibility with modern hardware. Below, we'll delve into what's new in Zara, what users should know before upgrading, and how it continues Mint's philosophy of combining usability, reliability, and elegance. What's New in Linux Mint 22.2 "Zara" Here's a breakdown of key changes, refinements, and enhancements in Zara. Base, Support & Kernel Stack Ubuntu 24.04 (Noble) base: Zara continues to use Ubuntu 24.04 as its upstream base, ensuring broad package compatibility and long-term security support. Kernel 6.14 (HWE): The default kernel for new installations is 6.14, bringing support for newer hardware. However, for existing systems upgraded from Mint 22 or 22.1, the older kernel (6.8 LTS) remains the default, because 6.14's support window is shorter. Zara is an LTS edition, with security updates and maintenance promised through 2029. Major Features & Enhancements Fingerprint Authentication via Fingwit Zara introduces a first-party tool called Fingwit to manage fingerprint-based authentication. With compatible hardware and support via the libfprint framework, users can:
- Enroll fingerprints
- Use fingerprint login for the screensaver
- Authenticate sudo commands
- Launch administrative tools via pkexec using the fingerprint
- In some cases, bypass password entry at login (unless home directory encryption or keyring constraints force password fallback)
It is important to note that fingerprint login on the actual login screen may be disabled or limited depending on encryption or keyring usage; in those cases, the system falls back to password entry. UI & Theming Refinements Sticky Notes app now sports rounded corners, improved Wayland compatibility, and a companion Android app named StyncyNotes (available via F-Droid) to sync notes across devices. Go to Full Article
- Ubuntu Update Backlog: How a Brief Canonical Outage Cascaded into Multi-Day Delays
by George Whittaker Introduction In early September 2025, Ubuntu users globally experienced disruptive delays in installing updates and new packages. What seemed like a fleeting outage—only about 36 minutes of server downtime—triggered a cascade of effects: mirrors lagging, queued requests overflowing, and installations hanging for days. The incident exposed how fragile parts of Ubuntu’s update infrastructure can be under sudden load.
In this article, we’ll walk through what happened, why the fallout was so severe, how Canonical responded, and lessons for users and infrastructure architects alike. What Happened: Outage & Immediate Impact On September 5, 2025, Canonical’s archive servers—specifically archive.ubuntu.com and security.ubuntu.com—suffered an unplanned outage. The status page for Canonical showed the incident lasting roughly 36 minutes, after which operations were declared “resolved.”
However, that brief disruption set off a domino effect. Because the archive and security servers serve as the central hubs for Ubuntu's package ecosystem, any downtime creates a massive backlog of mirror synchronization and client requests. Mirrors found themselves out of sync, processing queues piled up, and users attempting updates or new installs encountered failed downloads, hung operations, or "404 / package not found" errors.
On Ubuntu’s community forums, Canonical acknowledged that while the server outage was short, the upload / processing queue for security and repository updates had become “obscenely” backlogged. Users were urged to be patient, as there was no immediate workaround.
Throughout September 5–7, users continued reporting incomplete or failed updates, slow mirror responses, and installations freezing mid-process. Even newly provisioned systems faced broken repositories due to inconsistent mirror states.
By September 8, the situation largely stabilized: mirrors caught up, package availability resumed, and normal update flows returned. But the extended period of degraded service had already left many users frustrated. Why a Short Outage Turned into Days of Disruption At first blush, 36 minutes seems trivial. Why did it have such prolonged consequences? Several factors contributed:
Centralized repository backplane Ubuntu's infrastructure is architected around central Canonical-run repositories (archive, security), which then propagate to mirrors worldwide. When the central system is unavailable, mirrors stop receiving updates and become stale (a quick way to check a mirror for staleness is sketched below). Go to Full Article
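One practical way to spot the staleness described above is to compare the Date stamp in a mirror's Release file against the primary archive. The Python sketch below does exactly that; the mirror URL is a placeholder you would replace with whichever mirror your systems actually use, and the suite (noble) is just an example.

```python
# Quick sketch: compare the "Date:" stamp of a mirror's Release file against
# the primary archive to spot a stale mirror. The mirror URL is a placeholder.
from email.utils import parsedate_to_datetime
from urllib.request import urlopen

RELEASE_PATH = "/ubuntu/dists/noble/Release"          # example suite
PRIMARY = "http://archive.ubuntu.com" + RELEASE_PATH
MIRROR = "http://mirror.example.org" + RELEASE_PATH   # placeholder mirror

def release_date(url: str):
    """Fetch a Release file and return its Date: field as a datetime."""
    with urlopen(url, timeout=30) as resp:
        for raw in resp:
            line = raw.decode("utf-8", "replace")
            if line.startswith("Date:"):
                return parsedate_to_datetime(line.split(":", 1)[1].strip())
    return None

if __name__ == "__main__":
    primary, mirror = release_date(PRIMARY), release_date(MIRROR)
    print("primary archive Release date:", primary)
    print("mirror Release date:         ", mirror)
    if primary and mirror and mirror < primary:
        print("mirror appears stale by", primary - mirror)
```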
- Bringing Desktop Linux GUIs to Android: The Next Step in Graphical App Support
by George Whittaker Introduction Android has long been focused on running mobile apps, but in recent years, features aimed at developers and power users have begun pushing its boundaries. One exciting frontier: running full Linux graphical (GUI) applications on Android devices. What was once a novelty is now gradually becoming more viable, and recent developments point toward much smoother, GPU-accelerated Linux GUI experiences on Android.
In this article, we'll trace how Linux apps have run on Android so far, explain the new architecture changes enabling GPU rendering, showcase early demonstrations, discuss remaining hurdles, and look at where this capability is headed. The State of Linux on Android Today The Linux Terminal App Google's Linux Terminal app is the core interface for running Linux environments on Android. It spins up a virtual machine (VM), often booting Debian or similar, and lets users enter a shell, install packages, run command-line tools, etc.
Initially, the app was limited to text-based, terminal-only Linux programs; graphical apps were not meaningfully supported. More recently, Google introduced support for launching GUI Linux applications in experimental channels. Limitations: Rendering & Performance Even now, most GUI Linux apps on Android are rendered in software; that is, all drawing happens on the CPU (via a software renderer) rather than using the device's GPU. This leads to a sluggish UI, high CPU usage, more thermal stress, and shorter battery life.
Because of these limitations, running heavy GUI apps (graphics editors, games, desktop-level toolkits) has been more experimental than practical. What’s Changing: GPU-Accelerated Rendering The big leap forward is moving from CPU rendering to GPU-accelerated rendering, letting the device’s graphics hardware do the heavy lifting. Lavapipe (Current Baseline) At present, the Linux VM uses Lavapipe (a Mesa software rasterizer) to interpret GPU API calls on the CPU. This works, but is inefficient, especially for complex GUIs or animations. Introducing gfxstream Google is planning to integrate gfxstream into the Linux Terminal app. gfxstream is a GPU virtualization / forwarding technology: rather than reinterpreting graphics calls in software, it forwards them from the guest (Linux VM) to the host’s GPU directly. This avoids CPU overhead and enables near-native rendering speeds. Go to Full Article
- Fedora 43 Beta Released: A Preview of What's Ahead
by George Whittaker Introduction Fedora’s beta releases offer one of the earliest glimpses into the next major version of the distribution — letting users and developers poke, test, and report issues before the final version ships. With Fedora 43 Beta, released on September 16, 2025, the community begins the final stretch toward the stable Fedora 43.
This beta is largely feature-complete: developers hope it will closely match what the final release looks like (barring last-minute fixes). The goal is to surface regression bugs, UX issues, and compatibility problems before Fedora 43 is broadly adopted. Release & Availability The Fedora Project published the beta across multiple editions and media — Workstation, KDE Plasma, Server, IoT, Cloud, and spins/labs where applicable. ISO images are available for download from the official Fedora servers.
Users already running Fedora 42 can upgrade via the DNF system-upgrade mechanism (a minimal sketch of that flow appears below). Some spins (e.g. MATE or i3) are not fully available across all architectures yet.
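For reference, the documented system-upgrade flow looks roughly like the following. The Python wrapper is only there to make the steps explicit; on a real machine you would normally run these commands directly in a shell, and the target release of 43 assumes you are deliberately moving to the beta after reading the release notes.

```python
# Minimal sketch of the DNF system-upgrade flow for moving to Fedora 43 Beta.
# This just shells out to the standard dnf commands; run it with care.
import subprocess

STEPS = [
    ["sudo", "dnf", "upgrade", "--refresh"],                        # apply current updates first
    ["sudo", "dnf", "system-upgrade", "download", "--releasever=43"],
    # The final step reboots into the offline upgrade; uncomment deliberately.
    # ["sudo", "dnf", "system-upgrade", "reboot"],
]

if __name__ == "__main__":
    for cmd in STEPS:
        print("running:", " ".join(cmd))
        subprocess.run(cmd, check=True)
```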
Because it's a beta, users should be ready to encounter bugs. Fedora encourages testers to file issues via the QA mailing list or Fedora's issue-tracking infrastructure. Major New Features & Changes Fedora 43 Beta brings many updates under the hood, some in visible user features, others in core tooling and system behavior. Kernel, Desktop & Session Updates Fedora 43 Beta is built on Linux kernel 6.17. The Workstation edition features GNOME 49. In a bold shift, Fedora removes the GNOME X11 packages for Workstation, making Wayland the default and only session for GNOME. Existing users are migrated to Wayland. The KDE Plasma edition ships with Plasma 6.4. Installer & Package Management Fedora's Anaconda installer gets a WebUI by default for all Spins, providing a more unified and modern install experience across desktop variants. The installer now uses DNF5 internally, phasing out DNF4, which is now in maintenance mode. Auto-updates are enabled by default in Fedora Kinoite, ensuring that systems apply updates seamlessly in the background with minimal user intervention. Programming & Core Tooling Updates The Python version in Fedora 43 Beta moves to 3.14, an early adoption to catch bugs before the upstream release. Go to Full Article
|