NTLUG

Linux is free.
Life is good.

Linux Training
10am on Meeting Days!

1825 Monetary Lane Suite #104 Carrollton, TX

Do a presentation at NTLUG.

What is the Linux Installation Project?

Real companies using Linux!

Not just for business anymore.

Providing ready-to-run platforms on Linux


LinuxSecurity - Security Advisories







LWN.net


  • About KeePassXC's code quality control (KeePassXC blog)
    The KeePassXC project has recently updated its contribution policy and README to note its policy around contributions created with generative AI tools. The project's use of those tools, such as GitHub Copilot, has raised a number of questions and concerns, which the project has responded to:

    There are no AI features inside KeePassXC and there never will be!

    The use of Copilot for drafting pull requests is reserved for very simple and focused tasks with a small handful of changes, such as simple bugfixes or UI changes. We use it sparingly (mostly because it's not very good at complex tasks) and only where we think it offers a benefit. Copilot is good at helping developers plan complex changes by reviewing the code base and writing suggestions in markdown, as well as boilerplate tasks such as test development. Copilot can mess up, and we catch that in our standard review process (e.g., by committing a full directory of rubbish, which we identified and fixed). You can review our copilot instructions. Would we ever let AI rewrite our crypto stack? No. Would we let it refactor and rewrite large parts of the application? No. Would we ask it to fix a regression or add more test cases? Yes, sometimes.

    Emphasis in the original. See the full post to learn more about the project's processes and pull requests that have been created with AI assistance.



  • A proposed kernel policy for LLM-generated contributions
    The kernel community is currently reviewing a proposed policy for contributors who are using large language models to assist in the creation of their patches; the primary focus is on disclosure of the use of those tools. "The goal here is to clarify community expectations around tools. This lets everyone become more productive while also maintaining high degrees of trust between submitters and reviewers."


  • [$] Bootc for workstation use
    The bootc project allows users to create a bootable Linux system image using the container tooling that many developers are already familiar with. It is an evolution of OSTree (now called libostree), which is used to create Fedora Silverblue and other image-based distributions. While creating custom images is still a job for experts, the container technology simplifies delivering heavily customized images to non-technical users.


  • Security updates for Friday
    Security updates have been issued by AlmaLinux (bind, bind9.16, libsoup, mariadb:10.5, and sssd), Debian (chromium, keystone, and swift), Fedora (apptainer, buildah, chromium, fcitx5, fcitx5-anthy, fcitx5-chewing, fcitx5-chinese-addons, fcitx5-configtool, fcitx5-hangul, fcitx5-kkc, fcitx5-libthai, fcitx5-m17n, fcitx5-qt, fcitx5-rime, fcitx5-sayura, fcitx5-skk, fcitx5-table-extra, fcitx5-unikey, fcitx5-zhuyin, GeographicLib, libime, mbedtls, mingw-poppler, mupen64plus, python-starlette, webkitgtk, and xen), Mageia (dcmtk, java-1.8.0-openjdk, java-11-openjdk, java-17-openjdk, java-latest-openjdk, libvpx, and sqlite3), Oracle (bind, bind9.16, kernel, libsoup, libsoup3, osbuild-composer, qt6-qtsvg, sssd, and valkey), Red Hat (kernel and kernel-rt), SUSE (bind, gpg2, ImageMagick, python-Django, and runc), and Ubuntu (linux-azure, linux-azure-4.15, linux-fips, linux-aws-fips, linux-gcp-fips, linux-gcp, linux-gcp-6.8, linux-gke, linux-intel-iot-realtime, linux-realtime, linux-raspi-5.4, linux-realtime, and linux-realtime-6.8).


  • Mastodon 4.5 released
    Version 4.5 of the Mastodon decentralized social-media platform has been released. Notable features in this release include quote posts, native emoji support, as well as enhanced moderation and blocking features for server administrators. The project also has a post detailing new features in 4.5 for developers of clients and other software that interacts with Mastodon.



  • Freedesktop.org now hosts the Filesystem Hierarchy Standard
    The future of the Filesystem Hierarchy Standard (FHS) has been under discussion for some time; now, Neal Gompa has announced that the FHS is "hosted and stewarded" by Freedesktop.org.
    For those who are unaware, the Filesystem Hierarchy Standard (FHS) defines how POSIX operating systems organize system and user data. It is broadly adopted by Linux, BSD, and other operating systems that follow POSIX-like conventions.
    See this page for the specification's new home.


  • [$] Toward fast, containerized, user-space filesystems
    Filesystems are complex and performance-sensitive beasts. They can also present security concerns. Microkernel-based systems have long pushed filesystems into separate processes in order to contain any vulnerabilities that may be found there. Linux can do the same with the Filesystem in Userspace (FUSE) subsystem, but using FUSE brings a significant performance penalty. Darrick Wong is working on ways to eliminate that penalty, and he has a massive patchset showing how ext4 filesystems can be safely implemented in user space by unprivileged processes with good performance. This work has the potential to radically change how filesystems are managed on Linux systems.


  • Security updates for Thursday
    Security updates have been issued by Debian (unbound), Fedora (deepin-qt5integration, deepin-qt5platform-plugins, dtkcore, dtkgui, dtklog, dtkwidget, fcitx-qt5, fcitx5-qt, fontforge, gammaray, golang-github-openprinting-ipp-usb, kddockwidgets, keepassxc, kf5-akonadi-server, kf5-frameworkintegration, kf5-kwayland, plasma-integration, python-qt5, qadwaitadecorations, qt5, qt5-qt3d, qt5-qtbase, qt5-qtcharts, qt5-qtconnectivity, qt5-qtdatavis3d, qt5-qtdeclarative, qt5-qtdoc, qt5-qtgamepad, qt5-qtgraphicaleffects, qt5-qtimageformats, qt5-qtlocation, qt5-qtmultimedia, qt5-qtnetworkauth, qt5-qtquickcontrols, qt5-qtquickcontrols2, qt5-qtremoteobjects, qt5-qtscript, qt5-qtscxml, qt5-qtsensors, qt5-qtserialbus, qt5-qtserialport, qt5-qtspeech, qt5-qtsvg, qt5-qttools, qt5-qttranslations, qt5-qtvirtualkeyboard, qt5-qtwayland, qt5-qtwebchannel, qt5-qtwebengine, qt5-qtwebkit, qt5-qtwebsockets, qt5-qtwebview, qt5-qtx11extras, qt5-qtxmlpatterns, qt5ct, and xorg-x11-server), Mageia (binutils, gstreamer1.0-plugins-bad, libsoup, libsoup3, mediawiki, net-tools, and tigervnc, x11-server, and x11-server-xwayland), Red Hat (tigervnc), SUSE (aws-efs-utils, fetchmail, flake-pilot, ImageMagick, java-1_8_0-ibm, java-1_8_0-openjdk, kernel-devel, kubecolor, OpenSMTPD, sccache, tiff, and zellij), and Ubuntu (linux, linux-aws, linux-aws-6.14, linux-gcp, linux-gcp-6.14, linux-oem-6.14, linux-oracle, linux-oracle-6.14, linux-raspi, linux-realtime, linux, linux-aws, linux-gkeop, linux-hwe-6.8, linux-ibm, linux-ibm-6.8, linux-lowlatency, linux-lowlatency-hwe-6.8, linux-nvidia, linux-nvidia-lowlatency, linux, linux-aws, linux-kvm, linux-lts-xenial, linux-oracle-6.8, linux-realtime-6.14, poppler, python-django, and various linux-* packages).


  • [$] LWN.net Weekly Edition for November 6, 2025
    Inside this week's LWN.net Weekly Edition:
    Front: Python thread safety; Namespace reference counting; Merigraf; Speeding up short reads; Julia 1.12; systemd security. Briefs: CHERIoT 1.0; Chromium XSLT; Arm KASLR; Bazzite; Devuan 6.0; Incus 6.18; LXQt 2.3.0; Rust 1.91.0; Quotes; ... Announcements: Newsletters, conferences, security updates, patches, and more.


LXer Linux News





  • What is /dev/null in Linux?
    The “/dev/null” file is a special file found on all Linux systems. It is also referred to as the “null device,” a “void,” and sometimes “the black hole of Linux.”
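    As a quick illustration of the common idioms (standard POSIX shell redirection; the commands shown are just examples, not taken from the linked article):

```shell
# Anything written to /dev/null is silently discarded.
echo "this vanishes" > /dev/null

# Hide error messages only, keeping normal output:
ls /etc 2> /dev/null

# Discard both stdout and stderr:
ls /etc > /dev/null 2>&1

# Reading from /dev/null returns end-of-file immediately:
cat /dev/null    # prints nothing
```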



  • 9to5Linux Weekly Roundup: November 9th, 2025
    The 265th installment of the 9to5Linux Weekly Roundup is here for the week ending on November 9th, 2025, keeping you updated with the most important things happening in the Linux world.






Slashdot

  • NVIDIA Connects AI GPUs to Early Quantum Processors
    "Quantum computing is still years away, but Nvidia just built the bridge that will bring it closer..." argues investment site The Motley Fool, "by linking today's fastest AI GPUs with early quantum processors..." NVIDIA's new hybrid system strengthens communication at microsecond speeds — orders of magnitude faster than before — "allowing AI to stabilize and train quantum machines in real time, potentially pulling major breakthroughs years forward." CUDA-Q, Nvidia's open-source software layer, lets researchers choreograph that link — running AI models, quantum algorithms, and error-correction routines together as one system. That jump allows artificial intelligence to monitor [in real time]... For researchers, that means hundreds of new iterations where there used to be one — a genuine acceleration of discovery. It's the quiet kind of progress engineers love — invisible, but indispensable... Its GPUs (graphics processing units) are already tuned for the dense, parallel calculations these explorations demand, making them the natural partner for any emerging quantum processor... Other companies chase better quantum hardware — superconducting, photonic, trapped-ion — but all of them need reliable coordination with the computing power we already have. By offering that link, Nvidia turns its GPU ecosystem into the operating environment of hybrid computing, the connective tissue between what exists now and what's coming next. And because the system is open, every new lab or start-up that connects strengthens Nvidia's position as the default hub for quantum experimentation... There's also a defensive wisdom in this move. If quantum computing ever matures, it could threaten the same data center model that built Nvidia's empire. CEO Jensen Huang seems intent on making sure that, if the future shifts, Nvidia already sits at its center.
By owning the bridge between today's technology and tomorrow's, the company ensures it earns relevance — and revenue — no matter which computing model dominates. So Nvidia's move "isn't about building a quantum computer," the article argues, "it's about owning the bridge every quantum effort will need."


    Read more of this story at Slashdot.


  • Rust Foundation Announces 'Maintainers Fund' to Ensure Continuity and Support Long-Term Roles
    The Rust Foundation has a responsibility to "shed light on the impact of supporting the often unseen work" that keeps the Rust Project running. So this week they announced a new initiative "to provide consistent, transparent, and long term support for the developers who make the Rust programming language possible." It's the Rust Foundation Maintainers Fund, "an initiative we'll shape in close collaboration with the Rust Project Leadership Council and Project Directors to ensure funding decisions are made openly and with accountability." In the months ahead, we'll define the fund's structure, secure contributions, and work with the Rust Project and community to bring it to life. This work will build on lessons from earlier iterations of our grants and fellowships to create a lasting framework for supporting Rust's maintainers... Over the past several months, through ongoing board discussions and input from the Leadership Council, this initiative has taken shape as a way to help maintainers continue their vital development and review work, and plan for the future... This initiative reflects our commitment to Rust being shaped by its people, guided by open collaboration, and backed by a global network of contributors and partners. The Rust Foundation Maintainers Fund will operate within the governance framework shared between the Rust Project and the Rust Foundation, ensuring alignment and oversight at every level... The Rust Foundation's approach to this initiative will be guided by our structure: as a 501(c)(6) nonprofit, we operate under a mandate for transparency and accountability to the Rust Project, language community, and our members. That means we must develop this fund in coordination with the Rust Project's priorities, ensuring shared governance and long-term viability... Our goal is simple: to help the people building Rust continue their essential work with the support they deserve.
That means creating the conditions for long term maintainer roles and ensuring continuity for those whose efforts keep the language stable and evolving. Through the Rust Foundation Maintainers Fund, we aim to address these needs directly. "The more companies using Rust can contribute to the Rust Foundation Maintainers Fund, the more we can keep the language and tooling evolving for the benefit of everyone," says Rust Foundation project director Carol Nichols.


    Read more of this story at Slashdot.


  • Nonprofit Releases Thousands of Rare American Music Recordings Online
    The nonprofit Dust-to-Digital Foundation is making thousands of historic songs accessible to the public for free through a new partnership with the University of California, Santa Barbara. The songs represent "some of the rarest and most uniquely American music borne from the Jazz Age and the Great Depression," according to the university, and classic blues recordings or tracks by Fiddlin' John Carson and his daughter Moonshine Kate "would have likely been lost to landfills and faded from memory." Launched in 1999 by Lance and April Ledbetter, Dust-to-Digital focused on preserving hard-to-find music. Originally a commercial label producing high-quality box sets (along with CDs, records, and books), it established a nonprofit foundation in 2010, working closely with collectors to digitize and preserve record collections. And there's an interesting story about how they became familiar with library curator David Seubert... Once a relationship is established, Dust-to-Digital sets up special turntables and laptops in a collector's home, with paid technicians painstakingly digitizing and labeling each record, one song at a time. Depending on the size of the collection, the process can take months, even years... In 2006, they heard about Seubert's Cylinder Preservation and Digitization Project getting "slashdotted," a term that describes when a website crashes or receives a sudden and debilitating spike in traffic after being mentioned in an article on Slashdot. Here in 2025, the university's library already has over 50,000 songs in its Special Research Collections, which it has been uploading to the Discography of American Historical Recordings (DAHR) database. ("Recordings in the public domain are also available for free download, in keeping with the UCSB Library's mission for open access.") Over 5,000 more songs from Dust-to-Digital have already been added, says library curator Seubert, and "Thousands more are in the pipeline." One interesting detail?
    The bulk of the new songs come from Joe Bussard, a man whose 75-year obsession with record collecting earned him the name "the king of the record collectors" and "the saint of 78s."


    Read more of this story at Slashdot.


  • What Happens When Humans Start Writing for AI?
    The literary magazine of the Phi Beta Kappa society argues that "the replacement of human readers by AI has lately become a real possibility... In fact, there are good reasons to think that we will soon inhabit a world in which humans still write, but do so mostly for AI." "I write about artificial intelligence a lot, and lately I have begun to think of myself as writing for AI as well," the influential economist Tyler Cowen announced in a column for Bloomberg at the beginning of the year. He does this, he says, because he wants to boost his influence over the world, because he wants to help teach the AIs about things he cares about, and because, whether he wants to or not, he's already writing for AI, and so is everybody else. Large-language-model (LLM) chatbots such as ChatGPT and Claude are trained, in part, by reading the entire internet, so if you put anything of yourself online, even basic social-media posts that are public, you're writing for them. If you don't recognize this fact and embrace it, your work might get left behind or lost. For 25 years, search engines knit the web together. Anyone who wanted to know something went to Google, asked a question, clicked through some of the pages, weighed the information, and came to an answer. Now, the chatbot genie does that for you, spitting the answer out in a few neat paragraphs, which means that those who want to affect the world needn't care much about high Google results anymore. What they really want is for the AI to read their work, process it, and weigh it highly in what it says to the millions of humans who ask it questions every minute. How do you get it to do this? For that, we turn to PR people, always in search of influence, who are developing a form of writing (press releases and influence campaigns are writing) that's not so much search-engine-optimized as chatbot-optimized.
    It's important, they say, to write with clear structure, to announce your intentions, and especially to include as many formatted sections and headings as you can. In other words, to get ChatGPT to pay attention, you must write more like ChatGPT. It's also possible that, since LLMs understand natural language in a way traditional computer programs don't, good writing will be more privileged than the clickbait Google has succumbed to: One refreshing discovery PR experts have made is that the bots tend to prioritize information from high-quality outlets. Tyler Cowen also wrote in his Bloomberg column that "If you wish to achieve some kind of intellectual immortality, writing for the AIs is probably your best chance.... Give the AIs a sense not just of how you think, but how you feel — what upsets you, what you really treasure. Then future AI versions of you will come to life that much more, attracting more interest." Has AI changed the reasons we write? The Phi Beta Kappa magazine is left to consider the possibility that "power over a superintelligent beast and resurrection are nothing to sneeze at" — before offering another thought. "The most depressing reason to write for AI is that unlike most humans, AIs still read. They read a lot. They read everything. Whereas, aided by an AI no more advanced than the TikTok algorithm, humans now hardly read anything at all..."


    Read more of this story at Slashdot.


  • Apple Explores New Satellite Features for Future iPhones
    In 2022 the iPhone 14 featured emergency satellite service, and there's now support for roadside assistance and the ability to send and receive text messages. But for future iPhones, Apple is now reportedly working on five new satellite features, reports LiveMint: As per Bloomberg's Mark Gurman, Apple is building an API that would allow developers to add satellite connections to their own apps. However, the implementation is said to depend on app makers, and not every feature or service may be compatible with this system. The iPhone maker is also reportedly working on bringing satellite connectivity to Apple Maps, which would give users the chance to navigate without having access to a SIM card or Wi-Fi. The company is also said to be working on improved satellite messages that could support sending photos and not be limited to just text messages. Apple currently relies on the satellite network run by Globalstar to power current features on iPhones. However, the company is said to be exploring a potential sale, and Elon Musk's SpaceX could be a possible purchaser. The Mac Observer notes Bloomberg also reported Apple "has discussed building its own satellite service instead of depending on partners." And while some Apple executives pushed back, "the company continues to fund satellite research and infrastructure upgrades with the goal of offering a broader range of features." And "Future iPhones will use satellite links to extend 5G coverage in low-signal regions, ensuring that users remain connected even when cell towers are out of range.... Apple's slow but steady progress shows how the company wants iPhone satellite technology to move from emergency use to everyday convenience."


    Read more of this story at Slashdot.


  • Genetically Engineered Babies Are Banned in the US. But Tech Titans Are Trying to Make One Anyway
    "For months, a small company in San Francisco has been pursuing a secretive project: the birth of a genetically engineered baby," reports the Wall Street Journal: Backed by OpenAI chief executive Sam Altman and his husband, along with Coinbase co-founder and CEO Brian Armstrong, the startup — called Preventive — has been quietly preparing what would amount to a biological first. They are working toward creating a child born from an embryo edited to prevent a hereditary disease.... Editing genes in embryos with the intention of creating babies from them is banned in the U.S. and many countries. Preventive has been searching for places to experiment where embryo editing is allowed, including the United Arab Emirates, according to correspondence reviewed by The Wall Street Journal... Preventive is in the vanguard of a growing number of startups, funded by some of the most powerful people in Silicon Valley, that are pushing the boundaries of fertility and working to commercialize reproductive genetic technologies. Some are working on embryo editing, while others are already selling genetic screening tools that seek to account for the influence of dozens or hundreds of genes on a trait. They say their ultimate goal is to produce babies who are free of genetic disease and resilient against illnesses. Some say they can also give parents the ability to choose embryos that will have higher IQs and preferred traits such as height and eye color. Armstrong, the cryptocurrency billionaire, is leading the charge to make embryo editing a reality. He has told people that gene-editing technology could produce children who are less prone to heart disease, with lower cholesterol and stronger bones to prevent osteoporosis. According to documents and people briefed on his plans, he is already an investor or in talks with embryo editing ventures...
    After the Journal approached people close to the company last month to ask about its work, Preventive announced on its website that it had raised $30 million in investment to explore embryo editing. The statement pledged not to advance to human trials "if safety cannot be established through extensive research..." Other embryo editing startups are Manhattan Genomics, co-founded by Thiel Fellow Cathy Tie, and Bootstrap Bio, which plans to conduct tests in Honduras. Both companies are in early stages. The article notes the only known instance of children born from edited embryos was in 2018, when Chinese scientist He Jiankui "shocked the world with news that he had produced three children genetically altered as embryos to be immune to HIV." He was sentenced to prison in China for three years for the illegal practice of medicine. He hasn't publicly shared the children's identities but says they are healthy.


    Read more of this story at Slashdot.


  • Python Foundation Donations Surge After Rejecting Grant - But Sponsorships Still Needed
    After the Python Software Foundation rejected a $1.5 million grant because it restricted DEI activity, "a flood of new donations followed," according to a new report. By Friday they'd raised over $157,000, including 295 new Supporting Members paying an annual $99 membership fee, says PSF executive director Deb Nicholson. "It doesn't quite bridge the gap of $1.5 million, but it's incredibly impactful for us, both financially and in terms of feeling this strong groundswell of support from the community." Could that same security project still happen if new funding materializes? The PSF hasn't entirely given up. "The PSF is always looking for new opportunities to fund work benefiting the Python community," Nicholson told me in an email last week, adding pointedly that "we have received some helpful suggestions in response to our announcement that we will be pursuing." And even as things stand, the PSF sees itself as "always developing or implementing the latest technologies for protecting PyPI project maintainers and users from current threats," and it plans to continue with that commitment. The Python Software Foundation was "astounded and deeply appreciative at the outpouring of solidarity in both words and actions," their executive director wrote in a new blog post this week, saying the show of support "reminds us of the community's strength." But that post also acknowledges the reality that the Python Software Foundation's yearly revenue and assets (including contributions from major donors) "have declined, and costs have increased..." Historically, PyCon US has been a source of revenue for the PSF, enabling us to fund programs like our currently paused Grants Program... Unfortunately, PyCon US has run at a loss for three years — and not from a lack of effort from our staff and volunteers! Everyone has been working very hard to find areas where we can trim costs, but even with those efforts, inflation continues to surge, and changing U.S.
    and economic conditions have reduced our attendance... Because we have so few expense categories (the vast majority of our spending goes to running PyCon US, the Grants Program, and our small 13-member staff), we have limited "levers to pull" when it comes to budgeting and long-term sustainability... While Python usage continues to surge, "corporate investment back into the language and the community has declined overall. The PSF has longstanding sponsors and partners that we are ever grateful for, but signing on new corporate sponsors has slowed." (They're asking employees at Python-using companies to encourage sponsorships.) We have been seeking out alternate revenue channels to diversify our income, with some success and some challenges. PyPI Organizations offers paid features to companies (PyPI features are always free to community groups) and has begun bringing in monthly income. We've also been seeking out grant opportunities where we find good fits with our mission.... We currently have more than six months of runway (as opposed to our preferred 12 months+ of runway), so the PSF is not at immediate risk of having to make more dramatic changes, but we are on track to face difficult decisions if the situation doesn't shift in the next year. Based on all of this, the PSF has been making changes and working on multiple fronts to combat losses and work to ensure financial sustainability, in order to continue protecting and serving the community in the long term.
    Some of these changes and efforts include: pursuing new sponsors, specifically in the AI industry and the security sector; increasing sponsorship package pricing to match inflation; making adjustments to reduce PyCon US expenses; pursuing funding opportunities in the US and Europe; working with other organizations to raise awareness; and strategic planning, to ensure we are maximizing our impact for the community while cultivating mission-aligned revenue channels. The PSF's end-of-year fundraiser effort is usually run by staff based on their capacity, but this year we have assembled a fundraising team that includes Board members to put some more "oomph" behind the campaign. We'll be doing our regular fundraising activities; we'll also be creating a unique webpage, piloting temporary and VERY visible pop-ups to python.org and PyPI.org, and telling more stories from our Grants Program recipients... Keep your eyes on the PSF Blog, the PSF category on Discuss, and our social media accounts for updates and information as we kick off the fundraiser this month. Your boosts of our posts and your personal shares of "why I support the PSF" stories will make all the difference in our end-of-year fundraiser. If this post has you all fired up to personally support the future of Python and the PSF right now, we always welcome new PSF Supporting Members and donations.


    Read more of this story at Slashdot.


  • Blue Origin Postpones Attempt to Launch Unique 'EscaPADE' Orbiters to Mars
    UPDATE (1:16 PST): Today's launch has been scrubbed due to weather, and Blue Origin is now reviewing opportunities for new launch windows. Sunday morning, Blue Origin livestreamed the planned launch of its New Glenn rocket, which will carry a unique mission for NASA. "Twin spacecraft are set to take off on an unprecedented, winding journey to Mars," reports CNN, "where they will investigate why the barren red planet began to lose its atmosphere billions of years ago." By observing two Mars locations simultaneously, this mission can measure how Mars responds to space weather in real time — and how the Martian magnetosphere changes... Called EscaPADE, the mission will aim for an orbital trajectory that has never been attempted before, according to aerospace company Advanced Space, which is supporting the project. If successful, it could be a crucial case study that can allow extraordinary flexibility for planetary science missions down the road. The robotic mission plans to spend a year idling in an orbital backroad before heading to its target destination... [R]ather than turning toward Mars, the two orbiters will instead aim for Lagrange Point 2, or L2 — a cosmic balance point about 1.5 million kilometers (930,000 miles) from Earth. Lagrange points are special because they act as gravitational wells in which the pull of the sun and Earth are in perfect balance. The conditions can allow spacecraft to linger without being dragged away... The spacecraft will then loop endlessly in a kidney bean-shaped orbit around L2 until next year's Mars transfer window opens. This "launch and loiter" project is part of NASA's SIMPLEx [Small, Innovative Missions for Planetary Exploration] program, which seeks high-value missions for less money, notes CNN. "EscaPADE's cost was less than $100 million, compared with the roughly $300 million to $600 million price tags of other NASA satellites orbiting Mars."
"Blue Origin is also attempting to land and recover New Glenn's first-stage booster," notes another CNN article.


    Read more of this story at Slashdot.


  • 'AI Slop' in Court Filings: Lawyers Keep Citing Fake AI-Hallucinated Cases
    "According to court filings and interviews with lawyers and scholars, the legal profession in recent months has increasingly become a hotbed for AI blunders," reports the New York Times: Earlier this year, a lawyer filed a motion in a Texas bankruptcy court that cited a 1985 case called Brasher v. Stewart. Only the case doesn't exist. Artificial intelligence had concocted that citation, along with 31 others. A judge blasted the lawyer in an opinion, referring him to the state bar's disciplinary committee and mandating six hours of A.I. training. That filing was spotted by Robert Freund, a Los Angeles-based lawyer, who fed it to an online database that tracks legal A.I. misuse globally. Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it... [C]ourts are starting to map out punishments of small fines and other discipline. The problem, though, keeps getting worse. That's why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it. Initially he found three or four examples a month. Now he often receives that many in a day. Many lawyers... have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like "artificial intelligence," "fabricated cases" and "nonexistent cases." Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges' opinions scolding lawyers... Court-ordered penalties "are not having a deterrent effect," said Freund, who has publicly flagged more than four dozen examples this year. "The proof is that it continues to happen."


    Read more of this story at Slashdot.


  • Lost Unix v4 Possibly Recovered on a Forgotten Bell Labs Tape From 1973
    "A tape-based piece of unique Unix history may have been lying quietly in storage at the University of Utah for 50+ years," reports The Register. And the software librarian at Silicon Valley's Computer History Museum, Al Kossow of Bitsavers, believes the tape "has a pretty good chance of being recoverable." Long-time Slashdot reader bobdevine says the tape will be analyzed at the Computer History Museum. More from The Register: The news was posted to Mastodon by Professor Robert Ricci of the University of Utah's Kahlert School of Computing [along with a picture. "While cleaning a storage room, our staff found this tape containing #UNIX v4 from Bell Labs, circa 1973..." Ricci posted on Mastodon. "We have arranged to deliver it to the Computer History Museum."] The nine-track tape reel bears a handwritten label reading: UNIX Original From Bell Labs V4 (See Manual for format)... If it's what it says on the label, this is a notable discovery, because little of UNIX V4 remains. That's unfortunate, as this specific version is especially interesting: it's the first version of UNIX in which the kernel and some of the core utilities were rewritten in the new C programming language. Until now, the only surviving parts known were the source code to a slightly older version of the kernel and a few man pages — plus the Programmer's Manual [PDF], from November 1973. The Unix Heritage Society hosts those surviving parts — and apparently some other items of interest, according to a comment posted on Mastodon. "While going through the tapes from Dennis Ritchie earlier this year, I found some UNIX V4 distribution documents," posted Mastodon user "Broken Pipe," linking to tuhs.org/Archive/Applications/Dennis_Tapes/Gao_Analysis/v4_dist/. There's a file called license ("The program and information transmitted herewith is and shall remain the property of Bell Laboratories...") and coldboot ("Mount good tape on drive 0..."), plus a six-page "Setup" document that ends with these words...
We expect to have a UNIX seminar early in 1974. Good luck.
Ken Thompson
Dennis Ritchie
Bell Telephone Labs
Murray Hill, NJ 07974


    Read more of this story at Slashdot.


The Register

  • De-duplicating the desktops: Let's come together, right now
    Here come old FlatPak, it comes grooving up slowly...
    Comment The tendency of Linux developers to reinvent wheels is no secret. It's not so much the elephant in the room, as the entire jet-propelled guided ark ship full of every known and unknown member of the Proboscidea from Ambelodon to Stegodon via deinotheres, elephants, mammoths and other mastodons.…



  • Three most important factors in enterprise IT: control, control, control
    We’re all out of it. How to get it back is an open secret
    Opinion When the first generation of microcomputers landed on desktops, they promised many things. Affordability, flexibility, efficiency, all the good things still selling IT to this day. Mostly, though, they offered control.…




  • Techie ran up $40,000 bill trying to download a driver
    In the dialup age, small mistakes could cost big money
    Who, Me? Welcome to another week in the world of work, and therefore also to another edition of Who, Me? It’s The Register’s Monday reader-contributed column in which you admit to the error of your ways.…




  • Louvre's pathetic passwords belong in a museum, just not that one
    PLUS: CISA layoffs continue; Lawmakers criticize camera security; China to execute scammers; And more
    Infosec in brief There's no indication that the brazen bandits who stole jewels from the Louvre attacked the famed French museum's systems, but had they tried, it would have been incredibly easy.…



Polish Linux

  • Security: Why Linux Is Better Than Windows Or Mac OS
    Linux is a free and open source operating system, first released in 1991 by Linus Torvalds. Since its release it has built a user base that is widespread worldwide. Linux users swear by the reliability and freedom that this operating system offers, especially when compared to its counterparts, Windows and [0]


  • Essential Software That Are Not Available On Linux OS
    An operating system is essentially the most important component in a computer. It manages the different hardware and software components of a computer in the most effective way. There are different types of operating systems, and each comes with its own set of programs and software. You cannot expect a Linux program to have all [0]


  • Things You Never Knew About Your Operating System
    The advent of computers has brought about a revolution in our daily life. From computers so huge they filled a room, we have come a very long way to desktops and even palmtops. These machines have become our virtual lockers, and a life without these networked machines has become unimaginable. Sending mails, [0]


  • How To Fully Optimize Your Operating System
    Computers and systems are tricky and complicated. If you lack thorough, or even basic, knowledge of computers, you will often find yourself in a bind. You must understand that something as complicated as a computer requires constant care and constant cleaning up of junk files. Unless you put in the time to configure [0]


  • The Top Problems With Major Operating Systems
    No system is entirely free of problems. Even if your system and its operating system are easy to understand, there will be times when certain problems arise. Most of these problems are easy to handle and easy to get rid of. But you must be [0]


  • 8 Benefits Of Linux OS
    Linux is a small and fast-growing operating system. However, we can’t quite term it software. As discussed in the article about what a Linux OS can do, Linux is a kernel. Kernels underpin software and programs. These kernels are used by the computer and can be used with various third-party software [0]


  • Things Linux OS Can Do That Other OS Cant
    What Is Linux OS?  Linux, similar to Unix, is an operating system which can be used for various computers, handheld devices, embedded devices, etc. The reason Linux-based systems are preferred by many is that they are easy to use and re-use. A Linux-based operating system is technically not an Operating System. Operating [0]


  • Packagekit Interview
    PackageKit aims to simplify the management of applications on Linux and GNU systems. The main objective is to remove the pains it takes to maintain a system. In an interview, Richard Hughes, the developer of PackageKit, said that he aims to make Linux systems just as powerful as Windows or [0]


  • What’s New in Ubuntu?
    What Is Ubuntu? Ubuntu is open source software. It is useful for Linux-based computers. The software is backed by Canonical Ltd. and the Ubuntu community. Ubuntu was first released in late October 2004. The Ubuntu project uses the Java, Python, C, C++ and C# programming languages. What Is New? Version 17.04 is now available here [0]


  • Ext3 Reiserfs Xfs In Windows With Regards To Colinux
    The problem with Windows is that there are various limitations to the computer and there is only so much you can do with it. You can access Ext3, ReiserFS and XFS by using the coLinux tool. Download the tool from the official site or from the SourceForge site. Edit the connection to “TAP Win32 Adapter [0]


OSnews

  • Ironclad 0.7.0 and 0.8.0 released, adds RISC-V support
    We've talked about Ironclad a few times, but there have been two new releases since the 0.6.0 release we covered last, so let's see what the project's been up to. As a refresher, Ironclad is a formally verified, hard real-time capable kernel written in SPARK and Ada. Versions 0.7.0 and 0.8.0 improved support for block device caching, added a basic NVMe driver, added support for x86’s SMAP, switched from KVM to NVMM for Ironclad’s virtualization interface, and much, much more. In the meantime, Ironclad also added support for RISC-V, making it usable on any 64-bit RISC-V target that supports a Limine-protocol compatible bootloader. The easiest way to try out Ironclad is to download Gloire, a distribution that uses Ironclad and the GNU tools. It can be installed both in a virtual machine and on real hardware.


  • Mac OS 7.6 and 8 for CHRP releases discovered
    For those of us unaware (unlikely on OSNews, but still): for a hot minute in the second half of the '90s, Apple licensed its Mac OS to OEMs, resulting in officially sanctioned Mac clones from a variety of companies. While intended to grow the Mac's market share, what ended up happening instead is that the clone makers outcompeted Apple on performance, price, and features, with clones offering several features and capabilities before Apple did, for far lower prices. When Steve Jobs returned to Apple, he killed the clone program almost instantly. The rather abrupt end of the clone program means there are a number of variants of the Mac OS that never made their way to market, most notably variants intended for the Common Hardware Reference Platform, or CHRP, a standard defined by IBM and Apple for PowerPC-based PCs. Thanks to the popular classic Mac YouTuber Mac84, we now have a few of these releases out in the wild. These CDs contain release candidates for Mac OS 7.6 and Mac OS 8 for CHRP (Common Hardware Reference Platform) systems. They were created to support CHRP computers, but were never released, likely due to Steve Jobs returning to Apple in September 1997 and eliminating the Mac clone program and any CHRP efforts. ↫ Mac OS 7.6/8 CHRP releases page Mac84 has an accompanying video diving into more detail about these individual releases by booting and running them in an emulator, so we can get a better idea of what they contain. While most clone makers only got access to Mac OS 7.x, some of them did, in fact, gain access to Mac OS 8, namely UMAX and Power Computing (the latter of which was acquired by Apple). It's not the clone nature of these releases that makes them special; it's the fact that they're CHRP releases. This reference platform was a failure in the market, and only a few of IBM's own machines and some of Motorola's PowerStack machines properly supported it. Apple, meanwhile, only paid minor lip service to CHRP in its New World Power Macintosh machines.


  • FreeBSD now builds reproducibly and without root privilege
    The FreeBSD Foundation is pleased to announce that it has completed work to build FreeBSD without requiring root privilege. We have implemented support for all source release builds to use no-root infrastructure, eliminating the need for root privileges across the FreeBSD release pipeline. This work was completed as part of the program commissioned by the Sovereign Tech Agency. ↫ FreeBSD Foundation blog This is great news in and of itself, but there's more: FreeBSD has also improved build reproducibility. This means that given the same source input, you should end up with the same binary output, which is an important part of building a verifiable chain of trust. These two improvements combined further add to making FreeBSD a trustworthy, secure option, something it already is anyway. In case you haven't noticed, the FreeBSD project and its countless contributors have lately been making a ton of tangible progress on a wide variety of topics, from improving desktop use, to solidifying Wi-Fi support, to improving the chain of trust. I think the time is quite right for FreeBSD to make some inroads in the desktop UNIX-y space, especially for people to whom desktop Linux has strayed too far from the traditional UNIX philosophy (whatever that means).
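    The idea behind reproducibility is simple to state and to check: the same source input must yield a bit-identical artifact, so independent builders can compare digests. A minimal sketch of that check (a toy "build" function, not FreeBSD's actual build system), where embedding a build timestamp is exactly the kind of thing that breaks it:

```python
import hashlib

def build(source: bytes, timestamp: str = "") -> bytes:
    """Toy 'build': the artifact is a digest of the source plus any
    build-time metadata. Embedded timestamps break reproducibility."""
    return hashlib.sha256(source + timestamp.encode()).digest()

src = b"int main(void) { return 0; }"

# Reproducible: same input, same output, no matter who builds or when.
assert build(src) == build(src)

# Non-reproducible: a build-time stamp makes every run differ.
assert build(src, "2025-11-01T10:00") != build(src, "2025-11-01T10:01")
```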


  • LXQt 2.3.0 released
    LXQt, the other Qt desktop environment, has released version 2.3.0. This new version comes roughly six months after 2.2.0, and continues the project's adoption of Wayland. The enhancement of Wayland support has been continued, especially in LXQt Panel, whose Desktop Switcher is now enabled for Labwc, Niri, …. It is also equipped with a backend specifically for Wayfire. In addition, the Custom Command plugin is made more flexible, regardless of Wayland and X11. ↫ LXQt 2.3.0 release announcement The screenshot utility has been improved as well, and lxqt-qdbus has been added to lxqt-wayland-session to make qdbus commands easier to use with all kinds of Wayland compositors.


  • WINE gaming in FreeBSD Jails with Bastille
    FreeBSD offers a whole bunch of technologies and tools that make gaming on the platform a lot more capable than you'd think, and this article by Pertho dives into the details. Running all your games inside a FreeBSD Jail with Wine installed into it is pretty neat. Initially, I thought this was going to be pretty difficult and require a lot of trial and error, but I was surprised at how easy it was to get this all working. I was really happy to get some of my favorite games working in a FreeBSD Jail, and having ZFS snapshots around was a great way to test things in case I needed to backtrack. ↫ Pertho at their blog No, this isn't as easy as gaming on Linux has become, and it certainly requires a ton more work and knowledge than just installing a major Linux distribution and Steam, but for those of us who prefer a more traditional UNIX-like experience, this is a great option.


  • Tape containing UNIX v4 found
    A unique and very important find at the University of Utah: while cleaning out some storage rooms, the staff at the university discovered a tape containing a copy of UNIX v4 from Bell Labs. At this time, no complete copies are known to exist, and as such, this could be a crucial find for the archaeology of early UNIX. The tape in question will be sent to the Computer History Museum for further handling, where bitsavers.org will conduct the recovery process. I have the equipment. It is a 3M tape so it will probably be fine. It will be digitized on my analog recovery setup and I'll use Len Shustek's readtape program to recover the data. The only issue right now is my workflow isn't a "while you wait!" thing, so I need to pull all the pieces into one physical location and test everything before I tell Penny it's OK to come out. ↫ bitsavers.org It's amazing how we still manage to find such treasures in nooks and crannies all over the world, and with everything looking good so far, it seems we'll soon be able to fill in more of UNIX's early history.


  • There is no such thing as a 3.5 inch floppy disk
    Wait, what? The term "3.5 inch floppy disc" is in fact a misnomer. Whilst the specification for 5.25 inch floppy discs employs Imperial units, the later specification for the smaller floppy discs employs metric units. The standards for these discs all specify the measurements in metric, and only metric. These standards explicitly give the dimensions as 90.0mm by 94.0mm. It's in clause 6 of all three. ↫ Jonathan de Boyne Pollard Even the applicable standard in the US, ANSI X3.171-1989, specifies the size in metric. We could've been referring to these things using proper measurements instead of archaic ones based on the size of a monk's left testicle at dawn at room temperature in 1375 or whatever nonsense imperial or customary used to be based on. I feel dirty for thinking I had to use "inches" for this. If we ever need to talk about these disks on OSNews from here on out, I'll be using proper units of measurement.
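    The arithmetic behind the misnomer is quick to verify: at exactly 25.4 mm to the inch, the 90.0 mm width from the standard doesn't land on 3.5 inches.

```python
# 90.0 mm converted to inches (1 inch = 25.4 mm exactly, by definition).
width_in = 90.0 / 25.4
print(f"{width_in:.3f} in")  # about 3.543 in, so "3.5 inch" is an approximation
```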


  • Servo ported to Redox
    Redox keeps improving every month, and this past one is certainly a banger. The big news this past month is that Servo, the browser engine written in Rust, has been ported to Redox. It's extremely spartan at the moment, and crashes when a second website is loaded, but it's a promising start. It also just makes sense to have the premier Rust browser engine running on the premier Rust operating system. Htop and bottom have been ported to Redox for much improved system monitoring, and they're joined by a port of GoAccess. The version of Rust has been updated, which fixed some issues, and keyboard layout configuration has been greatly improved. Instead of a few hardcoded layouts, they can now be configured dynamically for users of PS/2 keyboards, with USB keyboards receiving this functionality soon as well. There's more, of course, as well as the usual slew of low-level changes and improvements to drivers, the kernel, relibc, and more.


  • MacOS 26’s new icons are a step backwards
    On the new MacOS 26 (Tahoe), Apple has mandated that all application icons fit into their prescribed squircle. No longer can icons have distinct shapes, nor even any fun frame-breaking accessories. Should an icon be so foolish as to try to have a bit of personality, it will find itself stuffed into a dingy gray icon jail. ↫ Paul Kafasis The downgraded icons listed in this article are just sad. While there's no accounting for tastes, Apple's new glassy icons are just plain bad, devoid of any whimsy, and lacking in artistry, especially considering that Apple once made beautifully crafted icons that set the bar for the entire industry. It almost seems like a metaphor for tech in general.


  • A lost IBM PC/AT model? Analyzing a newfound old BIOS
    Some people not only have a very particular set of skills, but also a very particular set of interests that happen to align with those skills perfectly. When several unidentified and mysterious IBM PC ROM chips from the 1980s were discovered on eBay, the dumped contents of two chips in particular proved troublesome to identify. In 1985, the FCh model byte could only mean the 5170 (PC/AT), and the even/odd byte interleaving does point at a 16-bit bus. But there are three known versions of the PC/AT BIOS released during the 5170 family's lifetime, corresponding to the three AT motherboard types. This one here is clearly not one of them: its date stamps and part numbers don't match, and the actual contents are substantially different besides. My first thought was that this may have come from one of those more shadowy members of the 5170 family: perhaps the AT/370, the 3270 AT/G(X), or the rack-mounted 7532 Industrial AT. But known examples of those carry the same firmware sets as the plain old 5170, so their BIOS extensions (if any) came in the shape of extra adapter ROMs. Whatever this thing was, some other 5170-type machine, a prototype, or even just a custom patch, it seemed I'd have to inquire within for any further clues. ↫ VileR at the int10h.org blog I'll be honest and state that most of the in-depth analysis of the code dumped from the ROM chips is far too complex for me to follow, but that doesn't make the story it tells any less interesting. There's no definitive, 100% conclusive answer at the end, but the available evidence collected by VileR does make a very strong case for a very specific, mysterious variant of the IBM PC being the likely source of the ROMs. If you're interested in some very deep IBM lore, here's your serving.


Linux Journal - The Original Magazine of the Linux Community

  • The Most Critical Linux Kernel Breaches of 2025 So Far
    by George Whittaker
    The Linux kernel, foundational for servers, desktops, embedded systems, and cloud infrastructure, has been under heightened scrutiny. Several vulnerabilities have been exploited in real-world attacks, targeting critical subsystems and isolation layers. In this article, we’ll walk through major examples, explain their significance, and offer actionable guidance for defenders.
    CVE-2025-21756 – Use-After-Free in the vsock Subsystem
    One of the most alarming flaws this year involves a use-after-free vulnerability in the Linux kernel’s vsock implementation (Virtual Socket), which enables communication between virtual machines and their hosts.

    How the exploit works: A malicious actor inside a VM (or other privileged context) manipulates reference counters when a vsock transport is reassigned. The code ends up freeing a socket object while it’s still in use, enabling memory corruption and potentially root-level access.

    Why it matters: Since vsock is used for VM-to-host and inter-VM communication, this flaw breaks a key isolation barrier. In multi-tenant cloud environments or container hosts that expose vsock endpoints, the impact can be severe.

    Mitigation: Kernel maintainers have released patches. If your systems run hosts, hypervisors, or other environments where vsock is present, make sure the kernel is updated and virtualization subsystems are patched.
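    The refcount mishandling at the heart of this bug class can be shown in miniature. The following is a toy Python model, not the kernel code: if a transport reassignment drops a reference it doesn't own, the object is "freed" while another context still holds a pointer to it.

```python
class VsockToy:
    """Toy model of a refcounted socket object (illustrative only)."""
    def __init__(self):
        self.refs = 1          # creator's reference
        self.freed = False

    def get(self):
        self.refs += 1         # a new user takes a reference
        return self

    def put(self):
        self.refs -= 1
        if self.refs == 0:
            self.freed = True  # stands in for kfree()

sock = VsockToy()
user = sock.get()              # a second context is still using the socket

# Buggy transport reassignment: releases BOTH references instead of one,
# mirroring the miscounted release in a use-after-free of this kind.
sock.put()
sock.put()

# The still-held reference now points at "freed" memory: use-after-free.
assert user.freed
```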
    CVE-2025-38236 – Out-of-Bounds / Sandbox Escape via UNIX Domain Sockets
    Another high-impact vulnerability involves the UNIX domain socket interface and the MSG_OOB flag. The bug was publicly detailed in August 2025 and is already in active discussion.

    Attack scenario: A process running inside a sandbox (for example a browser renderer) can exploit MSG_OOB operations on a UNIX domain socket to trigger a use-after-free or out-of-bounds read/write. That allows leaking kernel pointers or memory and then chaining to full kernel privilege escalation.

    Why it matters: This vulnerability is especially dangerous because it bridges from a low-privilege sandboxed process to kernel-level compromise. Many systems assume sandboxed code is safe; this attack undermines that assumption.

    Mitigation: Distributions and vendors (like browser teams) have disabled or restricted MSG_OOB usage for sandboxed contexts. Kernel patches are available. Systems that run browser sandboxes or other sandboxed processes need to apply these updates immediately.
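    The MSG_OOB code path in question is ordinary user-space API surface; this sketch simply exercises it harmlessly by sending one out-of-band byte across an AF_UNIX socketpair. Note that Linux only accepts MSG_OOB on AF_UNIX stream sockets on newer kernels (roughly 5.15 onward), so the example reports rather than assumes support.

```python
import socket

def unix_oob_roundtrip() -> str:
    """Send and receive one out-of-band byte over an AF_UNIX stream pair."""
    a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        a.send(b"x", socket.MSG_OOB)       # mark one byte as out-of-band
        data = b.recv(1, socket.MSG_OOB)   # read it back via the OOB path
        return "oob:" + data.decode()
    except OSError:
        return "unsupported"               # older kernels reject MSG_OOB here
    finally:
        a.close()
        b.close()

print(unix_oob_roundtrip())
```

On a patched kernel this remains perfectly safe to run; the vulnerability lay in corner cases of the kernel's bookkeeping for these OOB bytes, not in the API itself.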
    CVE-2025-38352 – TOCTOU Race Condition in POSIX CPU Timers
    In September 2025, the U.S. Cybersecurity & Infrastructure Security Agency (CISA) added this vulnerability to its Known Exploited Vulnerabilities (KEV) catalog.
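    TOCTOU (time-of-check to time-of-use) bugs are not specific to timers: the pattern is any gap between validating a resource and acting on it, during which a concurrent actor can change the resource. A minimal user-space analogue (illustrative only, unrelated to the kernel's timer code):

```python
import os
import tempfile

def read_if_allowed(path: str) -> str:
    # Time of check: the file is readable *right now*...
    if not os.access(path, os.R_OK):
        return ""
    # ...but between that check and this open(), another process could
    # replace, delete, or re-permission the file: the TOCTOU window.
    # The race-free idiom is to skip the check, just open(), and handle
    # the PermissionError/FileNotFoundError instead.
    with open(path) as f:
        return f.read()

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("hello")
    name = tmp.name
print(read_if_allowed(name))
os.unlink(name)
```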
    Go to Full Article


  • Steam Deck 2 Rumors Ignite a New Era for Linux Gaming
    by George Whittaker
    The speculation around a successor to the Steam Deck has stirred renewed excitement, not just for a new handheld, but for what it signals in Linux-based gaming. With whispers of next-gen specs, deeper integration of SteamOS, and an evolving handheld PC ecosystem, these rumors are fueling broader hopes that Linux gaming is entering a more mature age. In this article we look at the existing rumors, how they tie into the Linux gaming landscape, why this matters, and what to watch.
    What the Rumours Suggest
    Although Valve has kept things quiet, multiple credible outlets report that the Steam Deck 2 is in development and may arrive well after 2026. Some of the key tidbits:

    Editorials note that Valve isn’t planning a mere spec refresh; it wants a “generational leap in compute without sacrificing battery life”.

    A leaked hardware slide pointed to an AMD “Magnus”-class APU built on Zen 6 architecture being tied to next-gen handhelds, including speculation about the Steam Deck 2.

    One hardware leaker (KeplerL2) cited a possible 2028 launch window for the Steam Deck 2, which would make it roughly 6 years after the original.

    Valve’s own design leads have publicly stated that a refresh with only 20-30% more performance is “not meaningful enough”, implying they’re waiting for a more substantial upgrade.

    In short: while nothing is official yet, there’s strong evidence that Valve is working on the next iteration and wants it to be a noteworthy jump, not just a minor update.
    Why This Matters for Linux Gaming
    The rumoured arrival of the Steam Deck 2 isn’t just about hardware, it reflects and could accelerate key inflection points for Linux & gaming:
    Validation of SteamOS & Linux Gaming
    The original Steam Deck, running SteamOS (a Linux-based OS), helped prove that PC gaming doesn’t always require Windows. A well-received successor would further validate Linux as a first-class gaming platform, not a niche alternative but a mainstream choice.
    Handheld PC Ecosystem Momentum
    Since the first Deck, many Windows-based handhelds have entered the market (such as the ROG Ally and Lenovo Legion Go). Rumours of the Deck 2 keep the spotlight on the form factor and raise expectations for Linux-native handhelds. This momentum helps encourage driver, compatibility and OS investments from the broader community.
    Go to Full Article


  • Kali Linux 2025.3 Lands: Enhanced Wireless Capabilities, Ten New Tools & Infrastructure Refresh
    by George Whittaker
    Introduction
    The popular penetration-testing distribution Kali Linux has dropped its latest quarterly snapshot: version 2025.3. This release continues the tradition of the rolling-release model used by the project, offering users and security professionals a refreshed toolkit, broader hardware support (especially wireless), and infrastructure enhancements under the hood. With this update, the distribution aims to streamline lab setups, bolster wireless hacking capabilities (particularly on Raspberry Pi devices), and integrate modern workflows including automated VMs and LLM-based tooling.

    In this article, we’ll walk through the key highlights of Kali Linux 2025.3, how the changes affect users (both old and new), the upgrade path, and what to keep in mind for real-world deployment.
    What’s New in Kali Linux 2025.3
    This snapshot from the Kali team brings several categories of improvements: tooling, wireless/hardware support, architecture changes, virtualization/image workflows, UI and plugin tweaks. Below is a breakdown of the major updates.
    Tooling Additions: Ten Fresh Packages
    One of the headline items is the addition of ten new security tools to the Kali repositories. These tools reflect shifts in the field, toward AI-augmented recon, advanced wireless simulation and pivoting, and updated attack surface coverage. Among the additions are:

    Caido and Caido-cli – a client-server web-security auditing toolkit (graphical client + backend).

    Detect It Easy (DiE) – a utility for identifying file types, a useful tool in reverse engineering workflows.

    Gemini CLI – an open-source AI agent that integrates Google’s Gemini (or similar LLM) capabilities into the terminal environment.

    krbrelayx – a toolkit focused on Kerberos relaying/unconstrained delegation attacks.

    ligolo-mp – a multiplayer pivoting solution for network-lateral movement.

    llm-tools-nmap – allows large-language-model workflows to drive Nmap scans (automated/discovery).

    mcp-kali-server – configuration tooling to connect an AI agent to Kali infrastructure.

    patchleaks – a tool that detects security-fix patches and provides detailed descriptions (useful both for defenders and auditors).

    vwifi-dkms – enables creation of “dummy” Wi-Fi networks (virtual wireless interfaces) for advanced wireless testing and hacking exercises.
    Go to Full Article


  • VMScape: Cracking VM-Host Isolation in the Speculative Execution Age & How Linux Patches Respond
    by George Whittaker
    Introduction
    In the world of modern CPUs, speculative execution, where a processor guesses ahead on branches and executes instructions before the actual code path is confirmed, has long been recognized as a performance booster. However, it has also given rise to a class of vulnerabilities collectively known as “Spectre” attacks, where microarchitectural side states (such as the branch target buffer, caches, or predictor state) are mis-exploited to leak sensitive data.

    Now, a new attack variant, dubbed VMScape, exposes a previously under-appreciated weakness: the isolation between a guest virtual machine and its host (or hypervisor) in the branch predictor domain. In simpler terms: a malicious VM can influence the CPU’s branch predictor in such a way that when control returns to the host, secrets in the host or hypervisor can be exposed. This has major implications for cloud security, virtualization environments, and kernel/hypervisor protections.

    In this article we’ll walk through how VMScape works, the CPUs and environments it affects, how the Linux kernel and hypervisors are mitigating it, and what users, cloud operators and admins should know (and do).
    What VMScape Is & Why It Matters
    The Basics of Speculative Side-Channels
    Speculative execution vulnerabilities like Spectre exploit the gap between architectural state (what the software sees as completed instructions) and microarchitectural state (what the CPU has done internally, such as cache loads, branch predictor updates, etc). Even when speculative paths are rolled back architecturally, side-effects in the microarchitecture can remain and be probed by attackers.

    One of the original variants, Spectre-BTI (Branch Target Injection, also called Spectre v2) leveraged the Branch Target Buffer (BTB) / predictor to redirect speculative execution along attacker-controlled paths. Over time, hardware and software mitigations (IBRS, eIBRS, IBPB, STIBP) have been introduced. But VMScape shows that when virtualization enters the picture, the isolation assumptions break down.
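    Whether these mitigations are active on a particular machine is exposed by the Linux kernel through sysfs. This sketch reads that reporting, falling back gracefully where the file is absent (non-x86 hardware, non-Linux systems, or older kernels):

```python
from pathlib import Path

def spectre_v2_status() -> str:
    """Report the kernel's Spectre v2 mitigation status, if available."""
    p = Path("/sys/devices/system/cpu/vulnerabilities/spectre_v2")
    if p.exists():
        # Values look like "Mitigation: ...", "Not affected", or "Vulnerable".
        return p.read_text().strip()
    return "not reported on this system"

print(spectre_v2_status())
```

The same directory carries one file per known vulnerability class, so the identical pattern works for checking other speculative-execution issues.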
    VMScape: Guest to Host via Branch Predictor
    VMScape (tracked as CVE-2025-40300) is described by researchers from ETH Zürich as “the first Spectre-based end-to-end exploit in which a malicious guest VM can leak arbitrary sensitive information from the host domain/hypervisor, without requiring host code modifications and in default configuration.”

    Here are the key elements making VMScape significant:

    The attack is cross-virtualization: a guest VM influences the host’s branch predictor state (not just within the guest).
    Go to Full Article


  • Self-Tuning Linux Kernels: How LLM-Driven Agents Are Reinventing Scheduler Policies
    by George Whittaker
    Introduction
    Modern computing systems rely heavily on operating-system schedulers to allocate CPU time fairly and efficiently. Yet many of these schedulers operate blindly with respect to the meaning of workloads: they cannot distinguish, for example, whether a task is latency-sensitive or batch-oriented. This mismatch, between application semantics and scheduler heuristics, is often referred to as the semantic gap.

    A recent research framework called SchedCP aims to close that gap. By using autonomous LLM‐based agents, the system analyzes workload characteristics, selects or synthesizes custom scheduling policies, and safely deploys them into the kernel, without human intervention. This represents a meaningful step toward self-optimizing, application-aware kernels.

    In this article we will explore what SchedCP is, how it works under the hood, the evidence of its effectiveness, real-world implications, and what caveats remain.
    Why the Problem Matters
    At the heart of the issue is that general-purpose schedulers (for example the Linux kernel’s default policy) assume broad fairness, rather than tailoring scheduling to what your application cares about. For instance:

    A video-streaming service may care most about minimal tail latency.

    A CI/CD build system may care most about throughput and job completion time.

    A cloud analytics job may prefer maximum utilisation of cores with less concern for interactive responsiveness.

    Traditional schedulers treat all tasks mostly the same, tuning knobs generically. As a result, systems often sacrifice optimisation opportunities. Some prior efforts have used reinforcement-learning techniques to tune scheduler parameters, but these approaches have limitations: slow convergence, limited generalisation, and weak reasoning about why a workload behaves as it does.

    SchedCP starts from the observation that large language models can reason semantically about workloads (expressed in plain language or structured summaries), propose new scheduling strategies, and generate code via eBPF that is loaded into the kernel via the sched_ext interface. Thus, a custom scheduler (or modified policy) can be developed specifically for a given workload scenario, and in a self-service, automated way.
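    That split between reasoning ("what to optimise") and execution ("how to act") can be caricatured in a few lines. The following is a toy dispatcher with invented names, not SchedCP's actual API: a stand-in "agent" classifies a plain-language workload summary, and a stand-in control plane maps the verdict to a sched_ext-style scheduler name.

```python
# Toy illustration of the reasoning/execution split. Policy names are
# hypothetical (the scx_ prefix echoes sched_ext convention only).
POLICIES = {
    "latency":    "scx_lowlatency",   # favor short, interactive tasks
    "throughput": "scx_batch",        # favor long-running batch jobs
    "default":    "scx_fair",         # generic fair scheduling
}

def classify(summary: str) -> str:
    """Stand-in for the LLM agent: keyword 'reasoning' over the summary."""
    s = summary.lower()
    if any(w in s for w in ("latency", "interactive", "streaming")):
        return "latency"
    if any(w in s for w in ("batch", "build", "throughput", "analytics")):
        return "throughput"
    return "default"

def select_policy(summary: str) -> str:
    """Stand-in for the control plane: validate and act on the verdict."""
    return POLICIES.get(classify(summary), POLICIES["default"])

print(select_policy("video streaming, minimal tail latency"))     # scx_lowlatency
print(select_policy("CI/CD build farm, maximize job throughput"))  # scx_batch
```

In the real framework the "act" step compiles and safely loads an eBPF scheduler through sched_ext, rather than returning a name; the point here is only the decoupling of semantic classification from kernel-side execution.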
    Architecture & Key Components
    SchedCP comprises two primary subsystems: a control-plane framework and an agent loop that interacts with it. The framework decouples “what to optimise” (reasoning) from “how to act” (execution) in order to preserve kernel stability while enabling powerful optimisations.

    Here are the major components:
    Go to Full Article


  • Bcachefs Ousted from Mainline Kernel: The Move to DKMS and What It Means
    by George Whittaker
    Introduction
    After years of debate and development, bcachefs—a modern copy-on-write filesystem once merged into the Linux kernel—is being removed from mainline. As of kernel 6.17, the in-kernel implementation has been excised, and future use is expected via an out-of-tree DKMS module. This marks a turning point for the bcachefs project, raising questions about its stability, adoption, and relationship with the kernel development community.

    In this article, we’ll explore the background of bcachefs, the sequence of events leading to its removal, the technical and community dynamics involved, and implications for users, distributions, and the filesystem’s future.
    What Is Bcachefs?
    Before diving into the removal, let’s recap what bcachefs is and why it attracted attention.

    Origin & goals: Developed by Kent Overstreet, bcachefs emerged from ideas in the earlier bcache project (a block-device caching layer). It aimed to build a full-featured, general-purpose filesystem combining performance, reliability, and modern features (snapshots, compression, encryption) in a coherent design.

    Mainline inclusion: Bcachefs was merged into the mainline kernel in version 6.7 (released January 2024) after a lengthy review and incubation period.

    “Experimental” classification: Even after being part of the kernel, bcachefs always carried disclaimers about its maturity and stability; it was not necessarily recommended for production use by all users.

    Its presence in mainline gave distributions a path to ship it more readily, and users had easier access without building external modules — an important convenience for adoption.
    What Led to the Removal
    The excision of bcachefs from the kernel was not sudden but the culmination of tension over development practices, patch acceptance timing, and upstream policy norms.
    “Externally Maintained” status in 6.17
    In kernel 6.17’s preparation, maintainers marked bcachefs as “externally maintained.” Though the code remained present, the change signified that upstream would no longer accept new patches or updates within the kernel tree.

    This move allowed a transitional period. The code was “frozen” inside the tree to avoid breaking existing systems immediately, while preparation was made for future removal.
    Go to Full Article


  • Linux Mint 22.2 ‘Zara’ Released: Polished, Modern, and Built for Longevity
    by George Whittaker
    Introduction
    The Linux Mint team has officially unveiled Linux Mint 22.2, codenamed “Zara”, on September 4, 2025. As a Long-Term Support (LTS) release, Zara will receive updates through 2029, promising users stability, incremental improvements, and a comfortable desktop experience.

    This version is not about flashy overhauls; rather, it’s about refinement — applying polish to existing features, smoothing rough edges, weaving in new conveniences (like fingerprint login), and improving compatibility with modern hardware. Below, we’ll delve into what’s new in Zara, what users should know before upgrading, and how it continues Mint’s philosophy of combining usability, reliability, and elegance.
    What’s New in Linux Mint 22.2 “Zara”
    Here’s a breakdown of key changes, refinements, and enhancements in Zara.
    Base, Support & Kernel Stack
    Ubuntu 24.04 (Noble) base: Zara continues to use Ubuntu 24.04 as its upstream base, ensuring broad package compatibility and long-term security support.

    Kernel 6.14 (HWE): The default kernel for new installations is 6.14, bringing support for newer hardware.

    However — for existing systems upgraded from Mint 22 or 22.1 — the older kernel (6.8 LTS) remains the default, because 6.14’s support window is shorter.

    Zara is an LTS edition, with security updates and maintenance promised through 2029.
    Major Features & Enhancements
    Fingerprint Authentication via Fingwit
    Zara introduces a first-party tool called Fingwit to manage fingerprint-based authentication. With compatible hardware and support via the libfprint framework, users can:

    Enroll fingerprints

    Use fingerprint login for the screensaver

    Authenticate sudo commands

    Launch administrative tools via pkexec using the fingerprint

    In some cases, bypass password entry at login (unless home directory encryption or keyring constraints force password fallback)

    It is important to note that fingerprint login on the actual login screen may be disabled or limited depending on encryption or keyring usage; in those cases, the system falls back to password entry.
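    For context, on systems where fingerprint support is provided through fprintd and PAM, allowing a fingerprint to satisfy sudo typically involves a PAM rule along these lines (a generic illustration only; Fingwit's actual configuration may differ):

```
# /etc/pam.d/sudo (illustrative; exact file layout varies by distribution)
auth    sufficient    pam_fprintd.so
@include common-auth
```

    The “sufficient” control means a successful fingerprint match skips the password prompt, while a failed match or missing reader falls through to the normal password modules.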
    UI & Theming Refinements
    Sticky Notes app now sports rounded corners, improved Wayland compatibility, and a companion Android app named StyncyNotes (available via F-Droid) to sync notes across devices.
    Go to Full Article


  • Ubuntu Update Backlog: How a Brief Canonical Outage Cascaded into Multi-Day Delays
    by George Whittaker
    Introduction
    In early September 2025, Ubuntu users globally experienced disruptive delays in installing updates and new packages. What seemed like a fleeting outage—only about 36 minutes of server downtime—triggered a cascade of effects: mirrors lagging, queued requests overflowing, and installations hanging for days. The incident exposed how fragile parts of Ubuntu’s update infrastructure can be under sudden load.

    In this article, we’ll walk through what happened, why the fallout was so severe, how Canonical responded, and lessons for users and infrastructure architects alike.
    What Happened: Outage & Immediate Impact
    On September 5, 2025, Canonical’s archive servers—specifically archive.ubuntu.com and security.ubuntu.com—suffered an unplanned outage. The status page for Canonical showed the incident lasting roughly 36 minutes, after which operations were declared “resolved.”

    However, that brief disruption set off a domino effect. Because the archive and security servers serve as the central hubs for Ubuntu’s package ecosystem, any downtime creates a massive backlog among mirror servers and client requests. Mirrors fell out of sync, processing queues piled up, and users attempting updates or new installs encountered failed downloads, hung operations, or “404 / package not found” errors.

    On Ubuntu’s community forums, Canonical acknowledged that while the server outage was short, the upload / processing queue for security and repository updates had become “obscenely” backlogged. Users were urged to be patient, as there was no immediate workaround.

    Throughout September 5–7, users continued reporting incomplete or failed updates, slow mirror responses, and installations freezing mid-process. Even newly provisioned systems faced broken repos due to inconsistent mirror states.

    By September 8, the situation largely stabilized: mirrors caught up, package availability resumed, and normal update flows returned. But the extended period of degraded service had already left many users frustrated.
    Why a Short Outage Turned into Days of Disruption
    At first blush, 36 minutes seems trivial. Why did it have such prolonged consequences? Several factors contributed:

    Centralized repository backplane: Ubuntu’s infrastructure is architected around central canonical repositories (archive, security) which then propagate to mirrors worldwide. When the central system is unavailable, mirrors stop receiving updates and become stale.
    Go to Full Article


  • Bringing Desktop Linux GUIs to Android: The Next Step in Graphical App Support
    by George Whittaker
    Introduction
    Android has long been focused on running mobile apps, but in recent years, features aimed at developers and power users have begun pushing its boundaries. One exciting frontier: running full Linux graphical (GUI) applications on Android devices. What was once a novelty is now gradually becoming more viable, and recent developments point toward much smoother, GPU-accelerated Linux GUI experiences on Android.

    In this article, we’ll trace how Linux apps have run on Android so far, explain the new architecture changes enabling GPU rendering, showcase early demonstrations, discuss remaining hurdles, and look at where this capability is headed.
    The State of Linux on Android Today
    The Linux Terminal App
    Google’s Linux Terminal app is the core interface for running Linux environments on Android. It spins up a virtual machine (VM), often booting Debian or similar, and lets users enter a shell, install packages, run command-line tools, etc.

    Initially, the app was limited purely to text / terminal-based Linux programs; graphical apps were not supported meaningfully. More recently, Google introduced support for launching GUI Linux applications in experimental channels.
    Limitations: Rendering & Performance
    Even now, most GUI Linux apps on Android are rendered in software: all drawing happens on the CPU via a software renderer rather than on the device’s GPU. This leads to sluggish UI, high CPU usage, more thermal stress, and shorter battery life.

    Because of these limitations, running heavy GUI apps (graphics editors, games, desktop-level toolkits) has been more experimental than practical.
    What’s Changing: GPU-Accelerated Rendering
    The big leap forward is moving from CPU rendering to GPU-accelerated rendering, letting the device’s graphics hardware do the heavy lifting.
    Lavapipe (Current Baseline)
    At present, the Linux VM uses Lavapipe (a Mesa software rasterizer) to interpret GPU API calls on the CPU. This works, but is inefficient, especially for complex GUIs or animations.
    Introducing gfxstream
    Google is planning to integrate gfxstream into the Linux Terminal app. gfxstream is a GPU virtualization / forwarding technology: rather than reinterpreting graphics calls in software, it forwards them from the guest (Linux VM) to the host’s GPU directly. This avoids CPU overhead and enables near-native rendering speeds.
    Go to Full Article


  • Fedora 43 Beta Released: A Preview of What's Ahead
    by George Whittaker
    Introduction
    Fedora’s beta releases offer one of the earliest glimpses into the next major version of the distribution — letting users and developers poke, test, and report issues before the final version ships. With Fedora 43 Beta, released on September 16, 2025, the community begins the final stretch toward the stable Fedora 43.

    This beta is largely feature-complete: developers hope it will closely match what the final release looks like (barring last-minute fixes). The goal is to surface regression bugs, UX issues, and compatibility problems before Fedora 43 is broadly adopted.
    Release & Availability
    The Fedora Project published the beta across multiple editions and media — Workstation, KDE Plasma, Server, IoT, Cloud, and spins/labs where applicable. ISO images are available for download from the official Fedora servers.

    Users already running Fedora 42 can upgrade via the DNF system-upgrade mechanism. Some spins (e.g. MATE or i3) are not fully available across all architectures yet.
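    For systems on Fedora 42, the system-upgrade path generally looks like the following (a sketch assuming the dnf-plugin-system-upgrade tooling; back up before attempting a beta upgrade):

```shell
# Illustrative Fedora 42 → 43 Beta upgrade via DNF system-upgrade
# (requires the system-upgrade plugin; a beta may still carry breaking bugs)
sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=43
sudo dnf system-upgrade reboot
```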

    Because it’s a beta, users should be ready to encounter bugs. Fedora encourages testers to file issues via the QA mailing list or Fedora’s issue tracking infrastructure.
    Major New Features & Changes
    Fedora 43 Beta brings many updates under the hood — some in visible user features, others in core tooling and system behavior.
    Kernel, Desktop & Session Updates
    Fedora 43 Beta is built on Linux kernel 6.17.

    The Workstation edition features GNOME 49.

    In a bold shift, Fedora removes the GNOME X11 packages from the Workstation edition, making Wayland the only session for GNOME. Existing users are migrated to Wayland.

    On KDE, Fedora 43 Beta ships with KDE Plasma 6.4 in the Plasma edition.
    Installer & Package Management
    Fedora’s Anaconda installer gets a WebUI by default for all Spins, providing a more unified and modern install experience across desktop variants.

    The installer now uses DNF5 internally, phasing out DNF4, which is now in maintenance mode.

    Auto-updates are enabled by default in Fedora Kinoite, ensuring that systems apply updates seamlessly in the background with minimal user intervention.
    Programming & Core Tooling Updates
    The Python version in Fedora 43 Beta moves to 3.14, an early adoption to catch bugs before the upstream release.
    Go to Full Article


Page last modified on November 02, 2011, at 10:01 PM