NTLUG

Linux is free.
Life is good.

Linux Training
10am on Meeting Days!

1825 Monetary Lane Suite #104 Carrollton, TX

Do a presentation at NTLUG.

What is the Linux Installation Project?

Real companies using Linux!

Not just for business anymore.

Providing ready-to-run platforms on Linux


LinuxSecurity - Security Advisories



  • Debian DSA-6246-1 OpenJDK Critical Info Disclosure Denial of Service
    Several vulnerabilities have been discovered in the OpenJDK Java runtime, which may result in incorrect generation of cryptographic keys, denial of service, information disclosure, XXE attacks or incorrect validation of Kerberos credentials. For the stable distribution (trixie), these problems have been fixed in


  • Debian DSA-6245-1 Imagemagick Important DoS Code Execution Fix
    Multiple security vulnerabilities were discovered in imagemagick, a software suite used for editing and manipulating digital images, which could lead to denial of service, information disclosure or potentially arbitrary code execution if malformed images are processed. For the oldstable distribution (bookworm), these problems have been fixed




LWN.net

  • Kernel prepatch 7.1-rc2
    The second 7.1 kernel prepatch is out for testing. "It's not small, and while it's a bit early to say for sure, I do suspect we're seeing the same continued pattern of more patches than usual - probably due to AI tooling - that we saw in 7.0."


  • Eden: NHS goes to war against open source
    Terence Eden reports that the UK's National Health Service (NHS) is preparing to close almost all of its open-source repositories as a response to LLM tools, such as Anthropic's Mythos, becoming more sophisticated at finding security vulnerabilities. He does not, to put it mildly, agree with the decision:

    The majority of code repos published by the NHS are not meaningfully affected by any advance in security scanning. They're mostly data sets, internal tools, guidance, research tools, front-end design and the like. There is nothing in them which could realistically lead to a security incident.

    When I was working at NHSX during the pandemic, we were so confident of the safety and necessity of open source, we made sure the Covid Contact Tracing app was open sourced the minute it was available to the public. That was a nationally mandated app, installed on millions of phones, subject to intense scrutiny from hostile powers - and yet, despite publishing the code, architecture and documentation, the open source code caused zero security incidents.

    Furthermore, this new guidance is in direct contradiction to the UK's Tech Code of Practice point 3, "Be open and use open source", which insists on code being open.


  • [$] Version-controlled databases using Prolly trees
    Modern databases and filesystems make pervasive use of B-trees, which are tree structures optimized for storing sorted lists of keys and values on block devices. Dolt is an Apache 2.0-licensed project that makes clever use of a variant of a B-tree to support efficient version control for an entire database. The data structure it uses could well be of interest to other projects.
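The item above only gestures at what makes Dolt's B-tree variant (a "Prolly tree") version-control friendly. As a toy sketch of the core idea, content-defined chunk boundaries: split a sorted key list into chunks at positions chosen by hashing each key, so the same keys always produce the same chunks no matter how they arrived. The helper `chunk_keys` and its `target` parameter are invented for illustration; this is not Dolt's actual API or on-disk format.

```python
import hashlib

# Toy sketch of the Prolly-tree boundary idea (hypothetical helper, not
# Dolt's API): split a sorted key list into chunks at boundaries chosen
# by hashing each key, so identical key sets always chunk identically.
def chunk_keys(keys, target=4):
    chunks, current = [], []
    for key in sorted(keys):
        current.append(key)
        h = int.from_bytes(hashlib.sha256(key.encode()).digest()[:4], "big")
        if h % target == 0:  # probabilistic boundary; average chunk ~target keys
            chunks.append(current)
            current = []
    if current:
        chunks.append(current)
    return chunks

before = chunk_keys([f"row{i}" for i in range(20)])
after = chunk_keys([f"row{i}" for i in range(20)] + ["row99"])
# Because boundaries depend only on key content, chunks away from the
# inserted key are byte-identical between the two versions and can be
# shared, which is what makes whole-database diffs and history cheap.
shared = [c for c in before if c in after]
print(len(before), len(after), len(shared))
```

A plain B-tree would not have this property: its node splits depend on insertion order, so two logically identical tables could have entirely different trees.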


  • Security updates for Friday
    Security updates have been issued by AlmaLinux (fence-agents), Debian (chromium, dovecot, and kernel), Fedora (chromium, dotnet10.0, dotnet8.0, dotnet9.0, emacs, glow, jfrog-cli, openbao, pyp2spec, python3.6, rust-rustls-webpki, vhs, and xen), Oracle (grafana, grafana-pcp, PackageKit, sudo, vim, and xorg-x11-server), Red Hat (rhc), SUSE (avahi, bouncycastle, chromium, container-suseconnect, firewalld, gdk-pixbuf, grafana, java-25-openjdk, kernel, libixml11, libmozjs-140-0, libpng12-0, libsodium, libssh, mariadb, Mesa, ntfs-3g_ntfsprogs, openCryptoki, openexr, packagekit, prometheus-postgres_exporter, python-jwcrypto, python-mako, python-Pygments, python-pynacl, python311, python311-pyOpenSSL, python315, radare2, sed, and vim), and Ubuntu (kmod and zulucrypt).


  • [$] Restartable sequences, TCMalloc, and Hyrum's Law
    Hyrum's Law states that any observable behavior of a system will eventually be depended upon by somebody. The kernel community is currently contending with a clear demonstration of that principle. The recent work to address some restartable-sequences performance problems in the 6.19 release maintained the documented API in all respects, but that was not enough; Google's TCMalloc library, as it turns out, violates the documented API, prevents other code from using restartable features, and breaks with 6.19. But the kernel's no-regressions rule is forcing developers to find a way to accommodate TCMalloc's behavior.


  • GCC 16.1 released
    Version 16.1 of the GNU Compiler Collection (GCC) has been released.
    The C++ frontend now defaults to the GNU C++20 dialect and the corresponding parts of the standard library are no longer experimental. Several C++26 features receive experimental support, including Reflection (-freflection), Contracts, expansion statements and std::simd.
    Other changes include the introduction of an experimental compiler frontend for the Algol68 language, the ability to output GCC diagnostics in HTML form, and more.



  • Seven new stable kernels for Thursday
    Greg Kroah-Hartman has released the 7.0.3, 6.18.26, 6.12.85, 6.6.137, 6.1.170, 5.15.204, and 5.10.254 stable kernels. The 7.0.3 and 6.18.26 kernels only contain fixes needed for Xen users; the others, though, have backported fixes for the recently disclosed AEAD socket vulnerability. Kroah-Hartman advises that all users of the other kernel series must upgrade.



  • Security updates for Thursday
    Security updates have been issued by AlmaLinux (buildah, firefox, gdk-pixbuf2, giflib, grafana, java-1.8.0-openjdk, java-21-openjdk, LibRaw, OpenEXR, PackageKit, pcs, python3.11, python3.12, python3.9, sudo, tigervnc, vim, xorg-x11-server, xorg-x11-server-Xwayland, yggdrasil, and yggdrasil-worker-package-manager), Debian (calibre, firefox-esr, and openjdk-17), Fedora (asterisk, binaryen, buildah, dokuwiki, lemonldap-ng, libexif, libgcrypt, miniupnpd, openvpn, podman, python3.9, rust-rpm-sequoia, skopeo, and xdg-dbus-proxy), Red Hat (buildah, gdk-pixbuf2, and nodejs:20), SUSE (dnsdist, libheif, openCryptoki, polkit, sed, and xen), and Ubuntu (linux-bluefield, python-marshmallow, and roundcube).


  • [$] LWN.net Weekly Edition for April 30, 2026
    Inside this week's LWN.net Weekly Edition:
    Front: Famfs; Python packaging council; Zig concurrency; pages and folios; Strawberry music manager; 7.1 merge window. Briefs: GnuPG 2.5.19; Copy Fail; Plasma security; Fedora 44; Ubuntu 26.04; Niri 26.04; pip 26.1; RIP Seth Nickell; RIP Tomáš Kalibera; Quotes; ... Announcements: Newsletters, conferences, security updates, patches, and more.


  • A security bug in AEAD sockets
    Security analysis firm Xint has disclosed a security bug in the Linux kernel that allows for arbitrary 4-byte writes to the page cache, and which has been present since 2017. The vulnerability has been fixed in mainline kernels. A proof-of-concept script demonstrates how to use the flaw to corrupt a setuid binary, which works on multiple distributions, by requesting an AEAD-encrypted socket from user space and splicing a particular payload into it. A supplemental blog post gives more details about the discovery and remediation.
    A core primitive underlying this bug is splice(): it transfers data between file descriptors and pipes without copying, passing page cache pages by reference. When a user splices a file into a pipe and then into an AF_ALG socket, the socket's input scatterlist holds direct references to the kernel's cached pages of that file. The pages are not duplicated; the scatterlist entries point at the same physical pages that back every read(), mmap(), and execve() of that file.


LXer Linux News


  • 9to5Linux Weekly Roundup: May 3rd, 2026
    The 290th installment of the 9to5Linux Weekly Roundup is here for the week ending May 3rd, 2026, keeping you updated on the most important developments in the Linux world.


  • Adiuvo Explorer Board aims to bring Artix UltraScale+ FPGA to $99 platform
    Adiuvo is developing the Explorer Board, a compact FPGA platform built around the Artix UltraScale+ AU7P, targeting embedded, signal processing, and high-speed I/O applications. The design aims to provide access to UltraScale+ capabilities at a lower price point. The design is based on the AU7P FPGA, which provides approximately 37K LUTs, 75K flip-flops, 216 DSP […]


  • Many Exciting Google Summer of Code 2026 Projects & A Lot Of AI
    This week Google announced the selected Google Summer of Code (GSoC) 2026 projects, which provide stipends to student developers to work on different open-source projects. This year many of the projects involve AI/LLM adoption, but there are also a number of other interesting student projects at large, from GNOME Mutter GPU reset recovery to adding new features to FreeBSD...






  • New NTFS Driver Sees More Fixes With Linux 7.1-rc2
    One of the most prominent changes in the upcoming Linux 7.1 kernel release is the introduction of a new NTFS driver. This new driver provides more features and better performance than the Paragon NTFS3 driver that's been in the kernel for the past few years, and it is far better than the original read-only NTFS driver that previously was in the kernel and on which this new driver is based. Needless to say, it's also a big improvement over the NTFS-3G user-space FUSE driver too...




Slashdot

  • Can Investors Trust AI Sales Figures? Asks Wall Street Journal Opinion Piece
    A Wall Street Journal opinion piece warns of "a troubling trend" in AI's growth. "Rather than selling software, some AI companies are paying their partners to use it." It cites OpenAI's $1.5 billion joint venture with private-equity firms, Anthropic's $200 million contribution to a private-equity firm joint venture, and Google's $750 million subsidization of Gemini's adoption by consulting firms. "These agreements muddy the distinction between a company's sound growth trajectory and artificial financial engineering."

    [T]he scale and structure of the recent AI deals go beyond standard incentive mechanisms... When a seller pays customers to buy its products, it is unclear if its revenue growth reflects vibrant demand or a willingness to accept subsidies.

    Slashdot reader destinyland writes: This warning comes from a prominent figure in the investing community. For six years Robert Pozen was chairman of America's oldest mutual fund company, after five years at Fidelity. An advocate for corporate governance, he's currently a lecturer at MIT's business school (and the author of the book Remote Inc.: How to Thrive at Work... Wherever You Are).

    "As AI companies prepare initial public offerings, investors should scrutinize their numbers closely," Pozen writes, warning about "time-limited financial support." "In evaluating AI sales figures, analysts should consider the distorted incentives that the recent financing deals create": Private-equity firms, enticed by promised returns, might demand rapid rollouts of AI products rather than ensuring their orderly and safe development. Portfolio companies of private-equity firms may embrace AI tools not because they are needed but because adoption is mandated by their owners. Consultants may favor one set of AI models based on the subsidy instead of the merits. If guarantees and subsidies are major factors in the rapid adoption of AI tools, investors should be skeptical of AI companies' revenue projections. Many of their customers enticed by consultants will stop paying full price when the financial incentives are gone. Many of the portfolio companies of private-equity firms could back away from selected AI tools once these joint ventures expire.

    The challenge with evaluating these AI financing deals is the lack of transparency. At present, AI vendors don't separate revenue driven by subsidies or joint ventures from standard sales. The lesson from the telecom debacle is that financial engineering can obscure, for years, the difference between real customer demand and demand driven by incentives. When AI companies begin to finance their own product distribution, guaranteeing returns to investors and subsidizing sales, it's a signal for investors to dig deeper.

    Investing in an AI company? Ask what percentage of enterprise revenue is coming from subsidized channels or joint ventures, Pozen suggests. And the renewal/retention rate for customers not supported by subsidies or joint ventures...


    Read more of this story at Slashdot.


  • Roblox Blames Age-Verification Rollout for Lowered Growth. Stock Tumbles 22%
    Age verification became mandatory for chat access on Roblox in January — and Friday morning Quartz reported it has apparently impacted the company's financials:

    Roblox cut its full-year 2026 bookings forecast by roughly $900 million at the midpoint on Thursday, blaming stronger-than-expected headwinds from its mandatory age-verification rollout on an audience that skews heavily toward children and teenagers. Full-year 2026 bookings are now projected at $7.33 billion to $7.60 billion, a range that sits roughly $900 million below the prior guidance of $8.28 billion to $8.55 billion; analysts had expected $8.38 billion, according to Yahoo Finance. Roblox stock fell almost 22% in premarket trading...

    Daily active users rose 35% year over year to 132 million, while hours engaged climbed 43% to 31 billion hours... Daily active users and hours engaged fell below forecasts of 143.8 million and 33.68 billion, respectively, according to Yahoo Finance... Users who have not completed age checks have faced restricted communication features, and the process has weighed on the platform's ability to bring in new users. Russia's blocking of the platform, which took effect in December 2025, added further drag on user growth, according to Yahoo Finance. As of the end of the first quarter, 51% of global daily active users had completed age verification, with 65% of U.S. users having done so, Roblox said...

    The safety push has come with legal costs. Roblox accrued $57 million in the first quarter for settlements and settlement proposals with certain states over youth-related consumer protection and digital safety matters, with payments structured over multiple years, the company said. Roblox acknowledged in a letter to shareholders that "our aggressive push to enhance safety lowers our expectations for topline growth in 2026." But they argued that it also "makes our platform fundamentally better and amplifies the long-term growth potential of Roblox through more effective content targeting, tailored communication experiences, and improved community sentiment."




  • NetHack 5.0 Released
    "So yesterday the Devteam (it is always the Devteam) released version 5.0 of the legendary and venerable roguelike computer game NetHack," writes the Rogue-like games column @Play. "It is 39 years old..." MilenCent (Slashdot reader #219,397) writes: In addition to play changes it's left for players to discover, this version updates the code to compile with C99, makes it much easier to cross-compile the code for systems other than the one running the build, and now uses Lua for its dungeon generation. Happy hacking!

    For new players, "NetHack 5.0 now has an optional tutorial in the early phases of the game that might help you," notes the Rogue-like games column @Play:

    Binaries are provided for three systems: Windows, MS-DOS and Amiga. Yes, NetHack still supports MS-DOS, and yes, it still supports the classic Amiga: it explicitly supports AmigaDOS 3.0, meaning it can still run on 68000 machines... That these are the only systems they provide binaries for shouldn't be seen as an indication that these are the "most important" platforms for NetHack; it's more that, since it's entirely open source, building it yourself is entirely possible, and more expected than with most software. NetHack can be built for Linux, Windows 8-11, AmigaDOS, MacOS (I'm not sure if this includes classic Mac too but it might), Windows CE (wow), OS/2 (additional wow), BeOS, VMS and multiple Unixes... Another option is to play through public NetHack servers. The most popular of these are probably alt.org and Hardfought.




  • OpenAI Introduces AI-Generated Pets for Its Codex App
    "Vibe coding just got a whole lot more adorable," writes Engadget:

    OpenAI introduced AI-generated pets to the Codex app, its agentic coding tool. These "optional animated companions" don't do any coding themselves, but serve as a floating overlay that can tell you what Codex is working on, notify you when Codex completes a task, or flag when it needs your input on something. The new feature lets developers see Codex's active thread without having to switch away from their current open app.

    "The feature ships with eight built-in variations — including a cat and dog," reports Mashable. "But the more interesting play is the custom pet creator." Users can prompt Codex directly to generate their own companion, then share it online. A quick scroll through the homepage reveals the community has already gotten to work. Current creations include Goku, Patrick Star, Microsoft's long-retired Clippy, OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, and — naturally — a goblin. There's also Grogu, Dobby, a tiny Bob Rossi, and a "Doge-style Shiba Inu dog"...




  • AI Cameras are Being Deployed Across the Western US for Early Detection of Wildfires
    The Associated Press reports:

    On a March afternoon, artificial intelligence detected something resembling smoke on a camera feed from Arizona's Coconino National Forest. Human analysts verified it wasn't a cloud or dust, then alerted the state's forest service and largest electric utility. One of dozens of AI cameras installed for the utility Arizona Public Service had spotted early signs of what came to be known as the Diamond Fire. Firefighters raced to the scene and contained the blaze before it grew past 7 acres (2.8 hectares).

    As record-breaking heat and an abysmal snowpack raise concerns about severe wildfires, states across the fire-prone West are adding AI to their wildfire detection toolbox, banking on the technology to help save lives and property. Arizona Public Service has nearly 40 active AI smoke-detection cameras and plans to have 71 by summer's end, and the state's fire agency has deployed seven of its own. Another utility, Xcel Energy in Colorado, has installed 126 and aims to have cameras in seven of the eight states it serves by year's end... ALERTCalifornia is a network of some 1,240 AI-enabled cameras across the Golden State that work similarly to the system in Arizona...

    Pano AI, whose technology combines high-definition camera feeds, satellite data and AI monitoring, has seen growing interest in its cameras since launching in 2020. They've been deployed in Australia, Canada and 17 U.S. states, including Oregon, Washington and Texas... Last year, its technology detected 725 wildfires in the U.S., the company said... Cindy Kobold, an Arizona Public Service meteorologist, said the technology notifies them about 45 minutes faster on average than the first 911 call.




  • Carbon Pollution Is Making Food Less Nutritious, Risking the Health of Billions
    A new meta-analysis found nutrients in food decreased over the last 40 years, reports the Washington Post. "Many of humanity's most important crops — including wheat, potatoes, beans — contain fewer vitamins and minerals than they did a generation ago." "The invisible culprit behind this damaging phenomenon? Carbon dioxide pollution."

    Surging concentrations of carbon in the atmosphere, caused largely by burning fossil fuels, have produced potent changes in the way plants grow — from increasing their sugar content to depleting essential nutrients like zinc... "The diets we eat today have less nutritional density than what our grandparents ate, even if we eat exactly the same thing," said Kristie Ebi, a professor at the University of Washington's Center for Health and the Global Environment.

    People in wealthy countries with strong health care systems will have many tools to cope with the change, experts said. But for the world's poorest and most vulnerable, the consequences could be devastating. One study concluded that by the middle of the century the phenomenon could put more than a billion additional women and children at risk of iron-deficiency anemia — a condition that can cause pregnancy complications, developmental problems and even death. Meanwhile, some 2 billion people across the globe who already suffer from some form of nutrient shortage could see their health problems grow even worse. "The scale of the problem is huge," Ebi said.

    Plants depend on carbon dioxide to perform photosynthesis — but that doesn't mean they grow better when there's more carbon in the air, scientists say. A sweeping survey of changes among 32 compounds in 43 crops found that nearly every plant that humans eat is harmed by rising CO2 levels... On average, they found, nutrients have already decreased by an average of 3.2 percent across all plants since the late 1980s, when the concentration of carbon dioxide in the atmosphere was about 350 parts per million.
Thanks to long-time Slashdot reader GameboyRMH for sharing the news.




  • Robots Are Building Clay Homes In Texas Using Dirt From the Ground
    A startup south of Austin is using robots to build homes out of clay pulled directly from the ground, reports a local news station:

    The materials are gathered on site, mixed, and placed on a build plate. From there, a robot lowers from above, picks up the clay with a claw, carries it to the wall and drops it into place. Later, the same robot switches tools, using a hammer attachment to pound the material into shape. "It's kind of trying to replicate how a human might build an adobe house," said software engineer Anastasia Nikoulina... Using machine learning, the system constantly evaluates the wall, adjusting how it builds to create a flat, solid surface... The project is underway at Proto-Town, a ranch between Lockhart and Luling where startups test new technologies, from anti-drone systems to nuclear reactors. The company plans to build its next home on the property, with hopes to do more than 20 homes over the next year.




  • It's Goodbye Time for Jeeves and Ask.com - Relics of Yesterday's Internet
    A 1999 press release bragged that "Jeeves" answered 92.3 million questions in just three months. "In the digital wilds of Y2K, we came to him with our most probing questions," remembers the New York Times — whether it was Britney Spears or Tamagotchis:

    We asked, and he answered: Jeeves, the digital butler of information, the online valet who led us into the depths of cyberspace. Now, like so many other relics of yesterday's internet, Jeeves — and his home, Ask.com — are no more. After almost 30 years, the question-and-answer service and former search engine shuttered on Friday. "To you — the millions of users who turned to us for answers in a rapidly changing world — thank you for your endless curiosity, your loyalty, and your trust," the company said in a notice posted on its now-defunct website...

    Created in Berkeley, Calif., in the days of the dot-com gold rush, Ask Jeeves first appeared on computer screens in 1996... Their mascot, Jeeves, was modeled on the clever English butler character from the famed P.G. Wodehouse book series. Its search function was simple — type in a question, get an answer. But the quality of its responses was uneven, and the website was quickly eclipsed by Google and Yahoo as the world's go-to search engines. The site was bought by InterActive Corp. for more than $1 billion in 2005, and was given an injection of cash to help it compete as a search engine. It rebranded as Ask.com and, as part of the reimagining, also ditched the character of Jeeves in 2006. Scrappy but inventive, the site was one of the first to introduce hyperlocal map overlays to its searches and incorporate thumbnails of webpages. "They are doing a lot of clever and interesting things," a Google executive noted of Ask.com at the time. Still, Ask.com struggled to compete and returned in 2010 to its bread and butter: question-and-answer style prompts. Even then, it faltered against newer, crowdsourced iterations like Quora and Google's unyielding march to the internet fore — the platform now dominates search traffic, and the world's general experience of the internet.

    A statement at Ask.com ends by thanking its millions of users and saying "Jeeves' spirit endures," notes this article from Engadget:

    As sad as it is to see a relic of the early Internet days fade into obscurity, we still have Ask Jeeves to thank for why some users still punch in full questions when querying Google. On top of that, Jeeves was built to provide detailed answers in natural language, which could arguably have acted as a precursor to today's AI chatbots like ChatGPT.

    "Now, Ask.com joins the Internet graveyard that includes competitors like AltaVista, which shut down in 2013," the article points out. "With Ask.com gone, alongside AIM and AOL dial-up services also sunsetting, we're truly coming to the end of a specific era of the Internet." And the New York Times argues the memory of Jeeves now rests somewhere between Limewire and Beanie Babies... Slashdot reader BrianFagioli calls it "a quiet reminder of how quickly the web moves, and how even widely recognized names can drift into obscurity once the underlying technology leaves them behind."




  • Former Nintendo Executive Says Amazon Once Requested 'Illegal' Price Discounts
    Amazon once tried to pressure Nintendo to break the law, says former Nintendo of America President Reggie Fils-Aimé. At a recent NYU lecture, he described a conversation with an Amazon executive, Kotaku reports:

    "Amazon was looking to get bigger into the video game space," said Fils-Aimé. "Amazon's mentality back then is they wanted to have the lowest price out in the marketplace, even lower than Walmart... Essentially what Amazon wanted (was an) obscene amount of support, financial support, so they could have the lowest price and beat Walmart. I literally said to the executive, 'You know that's illegal, right? I can't do that'..."

    At the time, the Wii and DS were Nintendo's best-selling hardware in history. Amazon originally sold books, but in the 2000s rapidly expanded with cheaper discounts to become a one-stop shop for almost everything. Everything except Nintendo, that is... "Literally we stopped selling to Amazon," Fils-Aimé continued, "and it's because I wasn't going to do something illegal. I wasn't going to do something that would put at risk the relationship we have with other retailers."

    "The two sides have since made amends," notes the Verge, "and you can buy a Switch 2 through Amazon. But for a long time, Nintendo consoles had been largely unavailable on the site."




  • ChatGPT Became So Obsessed With Goblins That OpenAI Had to Intervene
    The Wall Street Journal reports that OpenAI "recently gave its popular ChatGPT strict instructions. Stop talking about goblins."

    Recent models of the artificial-intelligence chatbot have been bringing up the creatures in conversations with users seemingly out of the blue, as well as gremlins, trolls and ogres. The goblin-speak caught the attention of programmers, who are often heavy users of the bot. Barron Roth, a 32-year-old product manager at a tech company, said the bot referred to a flaw in his code as a "classic little goblin." He said he counted more than 20 times it mentioned goblins, without any prompting... Several users speculated that goblin terminology was how the model characterized itself, in lieu of identifying as a person with a soul.

    Then OpenAI decided enough was enough. "Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query," reads an open source line in ChatGPT's base instructions for its coding assistant. The Journal calls this "a reminder that even as AI companies tout one advance after another in their technology, they are sometimes baffled by the things their own models do..."

    While training a "nerdy" personality for their model's customization feature, "We unknowingly gave particularly high rewards for metaphors with creatures," OpenAI explained in a blog post. And "From there, the goblins spread."

    When we looked, use of "goblin" in ChatGPT had risen by 175% after the launch of GPT-5.1, while "gremlin" had risen by 52%... With GPT-5.4, we and our users noticed an even bigger uptick in references to these creatures... Nerdy accounted for only 2.5% of all ChatGPT responses, but 66.7% of all "goblin" mentions in ChatGPT responses... The rewards were applied only in the Nerdy condition, but reinforcement learning does not guarantee that learned behaviors stay neatly scoped to the condition that produced them. Once a style tic is rewarded, later training can spread or reinforce it elsewhere, especially if those outputs are reused in supervised fine-tuning or preference data.

    It all started because the "nerdy" personality's prompt had said "You must undercut pretension through playful use of language. The world is complex and strange, and its strangeness must be acknowledged, analyzed, and enjoyed..." Now OpenAI calls this "a powerful example of how reward signals can shape model behavior in unexpected ways, and how models can learn to generalize rewards in certain situations to unrelated ones." But "fans of goblins don't have to fear," notes the Wall Street Journal. "OpenAI provided a command in its blog post that would remove its creature-suppressing instructions."




The Register

  • Hope your holiday was horrid: You botched the last thing you did before leaving
    That box-full-of-old-tech-you-should-probably-have-thrown-out-but-kept-just-in-case got a techie in trouble
    Who, Me? Monday is upon us once again and The Register hopes that when you arrive at your desk, all is well. We offer that sentiment because we use the first day of the working week to bring you a fresh instalment of "Who, Me?" – the reader-contributed column in which you confess to making mistakes, and explain how you survived them.…



  • Five Eyes spook shops warn rapid rollouts of agentic AI are too risky
    Prioritize resilience over productivity, say CISA, NCSC and their friends from Oz, NZ, Canada
    Information security agencies from the nations of the Five Eyes security alliance have co-authored guidance on the use of agentic AI that warns the technology will likely misbehave and amplifies organizations’ existing frailties, and therefore recommend slow and careful adoption of the tech.…





  • Royal Navy chief backs drones, autonomous weapons in ‘Hybrid Navy’
    Plan mixes crewed ships, robot escorts, and long-range strike to bolster a stretched fleet
    The leader of Britain’s Royal Navy has outlined a “Hybrid Navy” built on a mix of crewed, uncrewed, and autonomous platforms to ensure it can continue to defend the nation and operate overseas.…


  • Job's a good 'un: Bank of England tech project wins watchdog praise
    PAC: Now why can't everybody else in public sector do it like this?
    Parliament's spending watchdog has held up a successful large-scale public sector tech transformation as a rare example worth emulating, in a striking departure from the usual diet of failure and overspend.…




Polish Linux

  • Security: Why Linux Is Better Than Windows Or Mac OS
    Linux is a free and open-source operating system, first released by Linus Torvalds in 1991. Since its release it has built a large and widespread user base worldwide. Linux users swear by the reliability and freedom that this operating system offers, especially when compared to its counterparts, Windows and [0]


  • Essential Software That Are Not Available On Linux OS
    An operating system is essentially the most important component in a computer. It manages the different hardware and software components of a computer in the most effective way. There are different types of operating systems, and each comes with its own set of programs and software. You cannot expect a Linux program to have all [0]


  • Things You Never Knew About Your Operating System
    The advent of computers has brought about a revolution in our daily life. From computers so huge they filled a room, we have come a very long way to desktops and even palmtops. These machines have become our virtual lockers, and a life without these networked machines has become unimaginable. Sending mails, [0]


  • How To Fully Optimize Your Operating System
    Computers and systems are tricky and complicated. If you lack thorough, or even basic, knowledge of computers, you will often find yourself in a bind. You must understand that something as complicated as a computer requires constant care and constant cleaning up of junk files. Unless you put in the time to configure [0]


  • The Top Problems With Major Operating Systems
    No system is entirely free of problems. Even if your system and its operating system are easy to understand, there will be times when certain problems arise. Most of these problems are easy to handle and easy to get rid of. But you must be [0]


  • 8 Benefits Of Linux OS
    Linux is a small and fast-growing operating system. However, we can’t term it as complete software yet. As discussed in the article about what a Linux OS can do, Linux is a kernel. Now, kernels underpin software and programs. They are used by the computer and can be used with various third-party software [0]


  • Things Linux OS Can Do That Other OS Can't
    What Is Linux OS? Linux, similar to Unix, is an operating system which can be used for various computers, handheld devices, embedded devices, etc. Many people prefer Linux-based systems because they are easy to use and reuse. Technically speaking, though, a Linux-based system is not an Operating System in itself. Operating [0]


  • Packagekit Interview
    PackageKit aims to simplify the management of applications on Linux and GNU systems. Its main objective is to remove the pains it takes to maintain a system. In an interview, Richard Hughes, the developer of PackageKit, said that he aims to make Linux systems just as powerful as Windows or [0]


  • What’s New in Ubuntu?
    What Is Ubuntu? Ubuntu is open-source software for Linux-based computers. It is marketed by Canonical Ltd. and the Ubuntu community. Ubuntu was first released in late October 2004. The Ubuntu program uses the Java, Python, C, C++ and C# programming languages. What Is New? Version 17.04 is now available here [0]


  • Ext3 Reiserfs Xfs In Windows With Regards To Colinux
    The problem with Windows is that it imposes various limitations, and there is only so much you can do with it. You can access Ext3, ReiserFS, and XFS by using the coLinux tool. Download the tool from the official site or from the SourceForge site. Edit the connection to “TAP Win32 Adapter [0]


OSnews

  • GNOME is good, actually
    While I'm normally a KDE user, I do keep close tabs on various other desktop environments, and install and set them up every now and then to see how they're faring, what improvements they've made, and ultimately, if my preference for KDE is still warranted. This usually means setting up a nice OpenBSD installation for Xfce, Fedora for GNOME, and less often others for some of the more niche desktop environments. Since GNOME 50 was just released, guess whose time in the round is up? Since everybody's already made up their mind about their preferred desktop eons ago, with upsides and downsides debated far past their expiration date, I'm not particularly interested in reviewing desktop environments or Linux distributions. However, after asking around on Fedi, it seemed there was quite a bit of interest in an article detailing how I set up GNOME, what changes I make to the defaults, which extensions I use, what tweaks I apply, and so on. Of course, everything described in this article is highly personal, and I'm not arguing that this is the optimal way to tweak GNOME, that the extensions I use are the best ones, or that any visual modifications I make are better than whatever defaults GNOME uses. No, my goal with this article is twofold: one, to highlight that GNOME is a lot more configurable, extensible, and malleable than common wisdom on the internet would have you believe. It's not KDE or one of those cobbled-together tiling Wayland desktops, but it's definitely not as rigid as you might think. And two, that GNOME is good, actually. Tools of the trade The first thing I do is install a few crucial tools that make it easier to modify and tweak GNOME. I really dislike lists in articles, but I will begrudgingly use one here: After installing all of these tools, the actual tweaking can commence. Visual tweaks I didn't use to like GNOME's Adwaita visual style, but over the years, it started growing on me to the point where I don't actively dislike it anymore. 
With the arrival of libadwaita, it has also become effectively impossible to theme modern GNOME applications, so even if you do change to something else, many of your applications won't follow along. If consistency is something you care about, you'll stick to Adwaita, but that leaves one problem unresolved: applications that still use GTK3. These applications will follow a much older version of Adwaita, making them stand out like eyesores among all the modern GTK4 stuff. Luckily, since GTK3 applications are still properly themable, this is easily fixed: just install the adw-gtk3 theme, either by hand, or through your distribution's repositories. To enable it, first install the user themes extension through Extension Manager, and then enable the theme in GNOME Tweaks for "Legacy Applications". Any potential GTK3 applications you still use will now integrate nicely with modern libadwaita applications. The one part of GNOME I really do deeply dislike is its icon theme. I can't quite explain why I dislike this icon set so much, but it runs deep, so one of the very first things I do is replace the default GNOME icon set with my personal favourite, Qogir. This is a popular icon set, so it's usually available in your distribution's repositories, but I always install it from its GitHub page. Changing GNOME's icon set is as simple as selecting it in GNOME Tweaks. You can't get much more personal taste than an icon set, and there are dozens of amazing sets to choose from in the Linux world. Changing them out and trying out new ones is stupidly easy, and it's definitely worth looking at a few that might be more pleasing to you than GNOME's (or KDE's) default. Lastly, I open Add Water and enable the amazing GNOME theme for LibreWolf. Add Water basically makes this as easy as flipping a switch, so there's no need to copy any files into your LibreWolf profile or whatever. 
The application also provides a few more small tweaks to fiddle with, like enabling standard tab widths so tabs don't grow and shrink as you close and open tabs, moving the bookmarks bar below the tab bar, and many more. Extensions Since the release of GNOME 3 in 2011, extensions have been the most capable way to modify GNOME's look, behaviour, and feature set. As far as I can tell, while the extension framework is an official part of the GNOME Shell, the extensions themselves are all third-party and not part of a vanilla GNOME installation. By now, there are over 2800 listed extensions, but that number includes abandoned extensions, so it's hard to determine the actual number of currently-maintained ones. Whatever the actual number is, there's bound to be things in there you're going to want to use. Here are the extensions I have installed. Let's just start at the top and work our way down. I guess I'm forced to do another list. There are countless more extensions to choose from, and you're definitely going to find things you never even thought could be useful. Miscellaneous tweaks There are a few other things I modify. In GNOME Tweaks, I make it so that double-clicking a window's titlebar minimises it while right-clicking it lowers it; two features I picked up during my years as a BeOS user that I absolutely refuse to give up. I configure the dock from Dash to Dock so that it always remains on top and never hides itself, no matter the circumstances. In Settings, I disable virtual desktops entirely (I don't like virtual desktops), and I make sure tap-to-click is disabled (if I'm on a laptop). GNOME is good, actually After making all of these changes, I feel quite comfortable using GNOME, at least on my laptop. It's a nice, coherent experience, and offers what is probably the most polished graphical user interface you can find on Linux, even if it isn't the most full-featured. The third-party application ecosystem, through modern
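Many of the tweaks described above are, under the hood, GSettings keys that GNOME Tweaks flips for you. The fragment below is a rough command-line equivalent, offered as a sketch: the key names and accepted values are my assumption of what GNOME Tweaks writes on a current GNOME release, and the theme and icon set (adw-gtk3, Qogir) must already be installed.

```shell
# Sketch of what GNOME Tweaks roughly does for the tweaks above.
# Assumes adw-gtk3 and Qogir are already installed on the system.
gsettings set org.gnome.desktop.interface gtk-theme 'adw-gtk3'
gsettings set org.gnome.desktop.interface icon-theme 'Qogir'

# Titlebar behaviour: double-click minimises, right-click lowers.
gsettings set org.gnome.desktop.wm.preferences action-double-click-titlebar 'minimize'
gsettings set org.gnome.desktop.wm.preferences action-right-click-titlebar 'lower'
```

Because these are plain GSettings writes, they can be dropped into a post-install script; GNOME Tweaks will show the same values afterwards.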


  • How fast is a macOS VM, and how small could it be?
    To assess how small a macOS VM could be, I ran the same VM of macOS 26.4.1 on progressively smaller CPU core and memory allocations, using my virtualiser Viable. The VM’s display window was set to a standard 1600 x 1000, and I ran Safari through its paces and performed some lightweight everyday tasks, including Storage analysis in Settings. Starting with 4 virtual cores and 8 GB vRAM, where the VM ran perfectly briskly with around 5 GB of memory used, I stepped down to 3 cores and 6 GB, to discover that memory usage fell to 3.9 GB and everything worked well. With just 2 cores and 4 GB of memory only 3.1 GB of that was used, and the VM continued to handle those lightweight tasks normally. ↫ Howard Oakley This is good news for people interested in the MacBook Neo who may also want to run a macOS virtual machine on it.


  • Email is crazy
    Email is like those creaking old Terminators from the ’70s which continue to function without complaining. Designed for a world that doesn’t exist anymore, it has optional encryption, no built-in auth, three-plus retrofitted security layers bolted on top, an unstandardized filtering layer and many more quirks. Yet billions of emails arrive correctly every single day. Email is not elegant but nonetheless it is Lindy. In the new age of agentic AI, we can only expect it to metamorphose into another dimension. ↫ Saurabh "Sam" Khawase Email being as complicated as it is would be bad enough on its own, but having it be so dominantly controlled by only a few large gatekeepers like Google and Microsoft surely isn't helping either. I feel like email is no longer really a technology individuals can actively partake in at every level; it feels much more like WhatsApp or iMessage or whatever in that we just get to send messages, and that's it. Running your own mail server isn't only a complex endeavour, it's also a continuous cat-and-mouse game with companies like Google and Microsoft to ensure you don't end up on some shitlist where your emails stop arriving. I settled on Fastmail as my email service, and it works quite well. Still, I would love to be able to just run my own email server, or have some of my far more capable friends run one for a small group of us, but it's such a daunting and unpleasant effort few people seem to have the stomach and perseverance for it.
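For context on why self-hosting is such a cat-and-mouse game: before the large providers will reliably accept your mail, your domain needs at least MX, SPF, DKIM, and DMARC records in place. A minimal zone-file sketch follows; example.org, the selector name, and the policy values are all placeholders of mine, not a tested recipe.

```
example.org.                  IN MX  10 mail.example.org.
example.org.                  IN TXT "v=spf1 mx -all"
_dmarc.example.org.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"
sel1._domainkey.example.org.  IN TXT "v=DKIM1; k=rsa; p=<base64-public-key>"
```

Even with all four in place, opaque reputation filtering can still silently junk your mail, which is exactly the complaint above.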


  • The day I logged 1 in every 2000 public IPv4: visualizing the AI scraper DDoS
    What if you run a few online services for you and your friends, like a small git instance and a grocery list service, but you get absolutely hammered by "AI" scrapers? I cannot impress upon you, reader, that this is not only an attack that is coordinated, it is an attack that is distributed. I run a small set of services, basically only for me and my friends. I am not a hyperscaler, I am not a tech company, I am not even a small platform. I have a git forge where I put the shit I make, and a couple other services where me and my friends backup our files or write our grocery lists. I am not fucking Meta and I cannot scale the fuck up just because OpenAI or Anthropic or Meta or whoever is training a model that week wants to suck all the content out of my VPS ONCE MORE until it’s dry. ↫ lux at VulpineCitrus So how much traffic did the author of this piece, lux, get from "AI" scraping bots? Within a time period of 24 hours, they were hammered by 2,040,670 unique IP addresses, 98% of which were IPv4 addresses, which means that roughly 1 out of every 2000 publicly available IPv4 addresses was involved in the scraping. Together, they performed over 5 million requests. And just to reiterate: they were scraping a few very small, friends-only services run by some random person. This is absolutely insane. If, at this point in time, with everything that we know about just how deeply unethical every single aspect of "AI" is, you're still using and promoting it, what is wrong with you? If you're so addicted to your "AI" girlfriend's unending stream of useless, forgettable sycophantic slop, despite being aware of the damage you're doing to those around you, there's something seriously wrong with you, and you desperately need professional help. You don't need any of this. The world doesn't need any of this. Nobody likes the slop "AI" regurgitates, and nobody likes you for enabling it. Get help.
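The "1 in 2000" figure holds up to back-of-the-envelope arithmetic. The sketch below assumes roughly 3.7 billion publicly routable IPv4 addresses (2^32 minus reserved and private ranges); that pool size is my assumption, not a number from the article.

```python
# Back-of-the-envelope check of the "1 in 2000" claim.
unique_ips = 2_040_670            # unique addresses seen in 24 hours
ipv4_fraction = 0.98              # share of those that were IPv4
routable_ipv4 = 3_700_000_000     # approximate public IPv4 pool (assumption)

ipv4_scrapers = unique_ips * ipv4_fraction   # ~2.0 million IPv4 scrapers
one_in = routable_ipv4 / ipv4_scrapers
print(f"roughly 1 in {one_in:,.0f} routable IPv4 addresses")
```

That lands at roughly 1 in 1,850, which rounds comfortably to the article's "1 in 2000".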


  • Earliest 86-DOS and PC-DOS code released as open source
    Microsoft is continuing its efforts to release early versions of DOS as open source, and today we've got a special one. We’re stoked today to showcase some newly available source code materials that provide an even earlier look into the development of PC-DOS 1.00, the first release of DOS for the IBM PC. A dedicated team of historians and preservationists led by Yufeng Gao and Rich Cini has worked to locate, scan, and transcribe the stack of DOS-era source listings from Tim Paterson, the author of DOS. The listings include sources to the 86-DOS 1.00 kernel, several development snapshots of the PC-DOS 1.00 kernel, and some well-known utilities such as CHKDSK. Not only were these assembler listings, but there were also listings of the assembler itself! This work offers rare insight into how MS-DOS/PC-DOS came to be, and how operating system development was done at the time, not as it was later reconstructed. ↫ Stacey Haffner and Scott Hanselman It's wild that the source code had to be transcribed from paper, including notes and changes. You can find more information about the process on Gao’s website and Cini’s website.


  • Apple gives up on Vision Pro, disbands Vision Pro team
    When Apple unveiled the Vision Pro, almost three (!) years ago, I concluded: If there’s one company that can convince people to spend $3500 to strap an isolating dystopian glowing robot mask onto their faces it’s Apple, but I still have a hard time believing this is what people want. ↫ Thom Holwerda at OSNews (quoting myself is weird) MacRumors' Juli Clover, today: Apple has all but given up on the Vision Pro after the M5 model failed to revitalize interest in the device, MacRumors has learned. Apple updated the Vision Pro with a faster M5 chip and a more comfortable band in October 2025, but there were no other hardware changes, and consumers still weren't interested. Apple has apparently stopped work on the Vision Pro and the Vision Pro team has been redistributed to other teams within Apple. Some former Vision Pro team members are working on Siri, which is not a surprise as Vision Pro chief Mike Rockwell has been leading the Siri team since March 2025. ↫ Juli Clover at MacRumors VR (which is what the Vision Pro is, whether Apple's marketing likes to say it or not) has proven to be good for exactly two things: games and porn. The Vision Pro has neither. It was destined to be a flop from the start, as nobody wants to strap an uncomfortable computer to their face that does less than all of the other computers they already have, and what it does do, it does worse. I do wonder if this makes the Vision Pro the most expensive flop in human history. Has any company ever spent more on a product that failed this spectacularly?


  • Apple wants to kill your Time Capsule, but they run NetBSD so they can't
    It seems like Apple is finally going to remove support for AFP from macOS, twelve years after first moving from AFP to SMB for its default network file-sharing technology. This change shouldn't impact most people, as it's highly unlikely you're using AFP for anything in 2026. Still, there is one small group of people to whom this change has an actual impact: owners of Apple's Time Capsule devices. Time Capsules only support AFP and SMB1, and with SMB1 being removed from macOS ages ago, and now AFP being on the chopping block as well, macOS 27 would render your Time Capsule more or less unusable. It's important to note that the last Time Capsule sold by Apple, the fifth generation, was released in 2013, and the product line as a whole was discontinued in 2018. If you bought a Time Capsule in the twilight years of the line's availability, I think you have a genuine reason to be perturbed by Apple cutting you off from your product if you upgrade to macOS 27, but at least you have the option of keeping an older version of macOS around so you can keep interacting with your Time Capsule. It still feels like a bit of a shitty move though, as those fifth generation models came with up to 3TB of storage, which can still serve as a solid NAS solution. Thank your lucky stars, then, that open source can, as usual, come to the rescue when proprietary software vendors do what they always do and screw over their customers. Did you know every generation of Time Capsule actually runs NetBSD, and that it's trivially easy to add support for Samba 4 and SMB3 authentication to your Time Capsule, thereby extending its life expectancy considerably? TimeCapsuleSMB does exactly that. If the setup completes successfully, your Time Capsule will run its own Samba 4 server, advertise itself over Bonjour (showing up automatically in the "Network" folder on macOS), and accept authenticated SMB3 connections from macOS. 
You should then be able to open Finder, choose Connect to Server, and use a normal SMB URL instead of relying on Apple’s legacy stack. You should also be able to use the disk for Time Machine backups. ↫ TimeCapsuleSMB It's compatible with both NetBSD 4 and NetBSD 6-based Time Capsules, although you'll need to run a single SMB activation command every time a NetBSD 4-based Time Capsule reboots. This will also disable any AFP and SMB1 support, but that is kind of moot since those are exactly the technologies that don't and won't work anymore once macOS 27 is released. The installation is also entirely reversible if, for whatever reason, you want to undo the addition of Samba 4. This whole saga is such an excellent example of why open source software protects users' rights, by design.


  • Dillo 3.3.0 released
    Dillo is an amazing web browser for those of us who want their web browsing experience to be calmer and less flashy. Dillo also happens to be a very UNIX-y browser, and their latest release, 3.3.0, underlines that. A new dilloc program is now available to control Dillo from the command line or from a script. It searches for Dillo by the PID in the DILLO_PID environment variable or for a unique Dillo process if not set. ↫ Dillo 3.3.0 release notes You can use this program to control your Dillo instance, with basic commands like reloading the current URL, opening a new URL, and so on, but also things like dumping the current page's contents. I have a feeling more commands and features will be added in future releases, but for now, even the current set of commands can be helpful for scripting purposes. I'm sure some of you who live and die in the terminal are already thinking of all the possibilities here. You can now also add page actions to the right-click context menu, so you can do things like reload a page with a Chrome curl impersonator to avoid certain JavaScript walls. This, too, is of course extensible. Dillo 3.3.0 also brings experimental support for building the browser with FLTK 1.4, and implemented a fix specifically to make OAuth work properly.


  • Ubuntu is going to integrate "AI", but Canonical remains vague about the how and why
    Ubuntu, being one of the more commercial Linux distributions, was always going to jump on the "AI" bandwagon, and Jon Seager, Canonical's VP Engineering, published a blog post with more details. Throughout 2026 we’ll be working on enabling access to frontier AI for Ubuntu users in a way that is deliberate, secure, and aligned with our open source values. By focusing on the combination of education for our engineers, our existing knowledge of building resilient systems and our strengthening silicon partnerships, we will deliver efficient local inference, powerful accessibility features, and a context-aware OS that makes Ubuntu meaningfully more capable for the people who rely on it. Ubuntu is not becoming an AI product, but it can become stronger with thoughtful AI integration. ↫ Jon Seager at Ubuntu Discourse The problem with this entire post is that, much like all other corporate communications about "AI", it's all deceptively vague, open-ended, and weaselly. Adjectives like "focused", "principled", "thoughtful", and "tasteful" don't really mean anything, and leave everything open for basically every type of slop "AI" feature under the sun. Their claims about open weights and open source models are also weakened by words like "favour" and "where possible", again leaving the door wide open for basically any shady "AI" company's models and features to find their way into your default Ubuntu installation. There's also very little in terms of concrete plans and proposed features, leaving Ubuntu users in the dark about what, exactly, is going to be added to their operating system of choice during the remainder of the year. There are mentions of improved text-to-speech/speech-to-text and text regurgitators, but that's about it. None of it feels particularly inspired or ground-breaking, and the veneer of open source, ethical model creation, and so on, is particularly thin this time around, even for Canonical. I don't really feel like I know a lot more about Canonical's "AI" intentions for Ubuntu after reading this post than I did before, other than that Ubuntu users might be able to generate text in their email client or whatever later this year. Is that really something anybody wants?


  • If 64-bit Windows 11 contains a copy of 32-bit explorer.exe, could you run it as its shell?
    Raymond Chen published a blog post about how a crappy uninstaller on Windows caused a mysterious spike in the number of Explorer (the Windows graphical shell) crashes. It turns out the buggy uninstaller caused repeated crashes in the 32-bit version of Explorer on 64-bit systems, and... hold on a minute. The how many bits on the what now? The 32-bit version of Explorer exists for backward compatibility with 32-bit programs. This is not the copy of Explorer that is handling your taskbar or desktop or File Explorer windows. So if the 32-bit Explorer is running on a 64-bit system, it’s because some other program is using it to do some dirty work. ↫ Raymond Chen at The Old New Thing I had no idea that 64-bit Windows included a copy of the 32-bit Explorer for backwards compatibility. It obviously makes sense, but I just never stopped to think about it. This made me wonder, though, if you could go nuts and do something really dumb: could you somehow trick 64-bit Windows into running this 32-bit copy of Explorer as its shell? You'd be running 32-bit Explorer on 64-bit Windows using the 32-bit WoW64 binaries you just pulled the 32-bit Explorer binary from, which seems like a really nonsensical thing to do. Since there are no longer any 32-bit builds of Windows 11, you also can't just copy over the 32-bit Explorer from a 32-bit Windows 11 build and achieve the same goal that way, so you'd really have to go digging around in WoW64 to get 32-bit versions. I guess the answer to this question depends on just how complete this copy of 32-bit Explorer really is, and whether Windows has any defenses or triggers in place to prevent someone from doing something this uselessly stupid. Of course, there's no practical reason to do any of this and it makes very little sense, but it might be a fun hacking project. Most likely the Windows experts among you are wondering what kind of utterly deranged new designer drug I'm on, but I was always told that sometimes, the dumbest questions can lead to the most interesting answers, so here we are.
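For anyone wondering where such an experiment would even start: the shell binary Windows launches at logon is just a registry value. The fragment below is a hypothetical, untested sketch; the per-user override, the SysWOW64 path, and whether Windows will accept any of it at all are my assumptions, and pointing Shell at the wrong binary can easily leave you with an unusable session.

```
Windows Registry Editor Version 5.00

; Hypothetical, untested sketch: point the per-user shell at the
; 32-bit Explorer that ships with 64-bit Windows as part of WoW64.
[HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Winlogon]
"Shell"="C:\\Windows\\SysWOW64\\explorer.exe"
```

Deleting the value restores the default shell from the machine-wide Winlogon key, which is presumably the first thing you would end up doing.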


Linux Journal - The Original Magazine of the Linux Community

  • Canonical Unveils Ubuntu AI Strategy: Local Models, User Control, and Smarter Workflows
    by George Whittaker
    Canonical has officially revealed its long-anticipated plans to bring artificial intelligence features into Ubuntu, marking a significant shift for one of the world’s most widely used Linux distributions. Rather than rushing into the AI wave, Canonical is taking a measured, privacy-focused approach, one that aims to enhance the operating system without compromising its open-source values.

    The rollout is expected to take place gradually throughout 2026, with early features likely appearing in upcoming Ubuntu releases.
    A Gradual, Thoughtful AI Rollout
    Canonical isn’t positioning Ubuntu as an “AI-first” operating system. Instead, the company is introducing AI in stages, focusing on practical improvements rather than hype-driven features.

    The plan follows a two-phase model:
    - Implicit AI features: Enhancements running quietly in the background
    - Explicit AI features: User-facing tools and workflows powered by AI
    This approach allows Ubuntu to evolve naturally, improving existing functionality before introducing more advanced capabilities.
    Local AI First, Not the Cloud
    One of the most important aspects of Canonical’s strategy is its emphasis on local AI processing, also known as on-device inference.

    Instead of sending data to remote servers, Ubuntu will aim to:
    - Run AI models directly on the user’s hardware
    - Reduce reliance on cloud services
    - Improve privacy and performance
    Canonical has made it clear that local inference will be the default, with cloud-based options available only when explicitly chosen by the user.

    This aligns closely with the privacy expectations of Linux users, who often prefer greater control over their data.
    What AI Features Could Look Like
    Canonical has outlined several potential use cases for AI inside Ubuntu. These include:
    Accessibility Improvements
    AI will enhance tools like:
    - Speech-to-text
    - Text-to-speech
    - Assistive technologies
    These features aim to make Ubuntu more inclusive and easier to use for a wider range of users.
    Smarter System Assistance
    Future AI features may help users:
    - Troubleshoot system issues
    - Interpret logs and error messages
    - Automate repetitive tasks
    This could significantly lower the learning curve for new Linux users.
    Agent-Based Automation
    Canonical is also exploring “agentic” AI workflows, where AI can take actions on behalf of the user.

    Examples include:
    Go to Full Article


  • Thunderbird 150 Lands on Linux: Smarter Encryption, Better Tools, and a Polished Experience
    by George Whittaker
    Mozilla has officially rolled out Thunderbird 150.0, the latest version of its open-source email client, bringing a mix of security-focused enhancements, usability upgrades, and workflow improvements for Linux and other platforms. Released in April 2026, this update continues Thunderbird’s steady evolution as a powerful desktop email solution.

    For Linux users, Thunderbird 150 delivers meaningful updates that improve both everyday usability and advanced email handling, especially for encrypted communication.
    Stronger Support for Encrypted Email
    One of the standout improvements in Thunderbird 150 is how it handles encrypted messages.

    Users can now:
    - Search inside encrypted emails (OpenPGP and S/MIME)
    - Generate “unobtrusive” OpenPGP signatures that appear cleaner to recipients
    These changes make encrypted communication far more practical, especially for users who rely on secure email for work or privacy-sensitive tasks.
    New Productivity and Workflow Features
    Thunderbird 150 introduces several small but impactful workflow improvements:
    - A new Account Hub opens automatically on first launch, simplifying setup
    - Recent Destinations in settings can now be sorted alphabetically
    - Address book entries can be copied as vCard files
    - A new custom accent color option allows interface personalization
    These updates make Thunderbird easier to configure and more flexible to use daily.
    Improved Built-In PDF Viewer
    Thunderbird’s integrated PDF viewer gets a useful upgrade: users can now reorder pages directly within the viewer.

    This is particularly helpful for:
    - Managing attachments without external tools
    - Editing documents quickly before sending
    - Streamlining email-based workflows
    Combined with ongoing security fixes, the PDF viewer becomes both more capable and safer.
    Calendar and Interface Enhancements
    Several improvements focus on usability and accessibility:
    - Calendar views now support touchscreen scrolling
    - Fixed issues with calendar layouts and navigation
    - Better screen reader support and accessibility fixes
    - General UI refinements across the application
    These changes contribute to a smoother, more consistent user experience across devices.
    Bug Fixes and Stability Improvements
    Thunderbird 150 also resolves a wide range of issues, including:
    Go to Full Article


  • Linux Kernel 6.19 Reaches End of Life: Time to Move Forward
    by George Whittaker
    The Linux kernel continues its fast-paced release cycle, and with that comes an important milestone: Linux kernel 6.19 has officially reached end of life (EOL). For users and distributions still running this branch, it’s now time to upgrade to a newer kernel version.

    This isn’t unexpected: Linux 6.19 was never intended to be a long-term release. But it does serve as a reminder of how quickly non-LTS kernel branches move through their lifecycle.
    Official End of Support
    The final update in the 6.19 series, Linux 6.19.14, has been released and marked as the last maintenance version. Kernel maintainer Greg Kroah-Hartman confirmed that no further updates will follow, stating that the branch is now officially end-of-life.

    On kernel.org, the 6.19 series is now listed as EOL, meaning it will no longer receive bug fixes or security patches.
    Why 6.19 Had a Short Lifespan
    Unlike some kernel releases, Linux 6.19 was not a long-term support (LTS) version. Short-lived kernel branches are typically supported for only a few months before being replaced by newer releases.

    Linux follows a rapid development model:
    - New major versions are released frequently
    - Short-term branches receive limited updates
    - Only selected kernels are designated as LTS for extended support
    Because of this, 6.19 was always meant to be a stepping stone rather than a long-term foundation.
    What Users Should Do Now
    With 6.19 no longer maintained, continuing to use it poses risks, especially in environments where security and stability matter.

    Recommended upgrade paths include:
    Upgrade to Linux 7.0
    The most direct path forward is the Linux 7.0 kernel series, which succeeds 6.19 and introduces new hardware support and ongoing fixes.

    This is a good option for:
    - Desktop users
    - Rolling-release distributions
    - Users who want the latest features
    Switch to an LTS Kernel
    For production systems, servers, or long-term stability, moving to an LTS kernel is often the better choice.

    Current LTS options include:
    - Linux 6.18 LTS (supported until 2028)
    - Linux 6.12 LTS (supported until 2028)
    - Linux 6.6 LTS (supported until 2027)
    These versions receive ongoing security updates and are better suited for stable environments.
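A quick way to act on this advice is to check which series your machine is running against the supported list. The sketch below hard-codes the series named in this article rather than querying kernel.org, so treat the table as illustrative:

```python
import platform

# Series listed as currently maintained in this article; for the
# authoritative status, always check kernel.org.
SUPPORTED_SERIES = {"7.0", "6.18", "6.12", "6.6"}

def series_of(release: str) -> str:
    """Extract the major.minor series from a release string like '6.19.14-arch1'."""
    major, minor = release.split(".")[:2]
    return f"{major}.{minor}"

def is_supported(release: str) -> bool:
    """True if the kernel release belongs to a maintained series."""
    return series_of(release) in SUPPORTED_SERIES

if __name__ == "__main__":
    running = platform.release()
    status = "still maintained" if is_supported(running) else "EOL: consider upgrading"
    print(f"Running kernel {running}: {status}")
```

On a machine still on the 6.19 series this prints an upgrade warning, since "6.19" is absent from the supported set.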
    Why EOL Matters
    When a kernel reaches end of life:
    Go to Full Article


  • Archinstall 4.2 Shifts to Wayland-First Profiles, Leaving X.Org Behind
    by George Whittaker
    The Arch Linux installer continues evolving alongside the broader Linux desktop ecosystem. With the release of Archinstall 4.2, a notable change has arrived: Wayland is now the default focus for graphical installation profiles, while traditional X.Org-based profiles have been removed or deprioritized.

    This move reflects a wider transition happening across Linux, one that is gradually redefining how graphical environments are built and used.
    A Turning Point for Archinstall
    Archinstall, the official guided installer for Arch Linux, has steadily improved over time to make installation more accessible while still maintaining Arch’s minimalist philosophy.

    With version 4.2, the installer now aligns more closely with modern desktop trends by emphasizing Wayland-based environments during setup, instead of offering traditional X.Org configurations as first-class options.

    This doesn’t mean X.Org is completely gone from Arch Linux, but it does signal a clear shift in direction.
    Why Wayland Is Taking Over
    Wayland has been gaining traction for years as the successor to X.Org, offering a more streamlined and secure approach to rendering graphics on Linux.

    Compared to X.Org, Wayland is designed to:
    - Reduce complexity in the graphics stack
    - Improve security by isolating applications
    - Deliver smoother rendering and better performance
    - Support modern display technologies like high-DPI and variable refresh rates
    As the Linux ecosystem evolves, many distributions and desktop environments are prioritizing Wayland as the default display protocol.
    What Changed in Archinstall 4.2
    With this release, users installing Arch through Archinstall will notice:
    - Wayland-based desktop environments and compositors are now the primary options
    - X.Org-centric setups are no longer emphasized in guided profiles
    - Installation workflows better reflect modern Linux defaults
    This simplifies the installation experience for new users, who no longer need to choose between legacy and modern display systems during setup.
    What About X.Org?
    While Archinstall is moving forward, X.Org itself is not disappearing overnight.

    Many applications and workflows still rely on X11, and compatibility is maintained through XWayland, which allows X11 applications to run within Wayland sessions.
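After installing, it is easy to confirm which protocol a session actually ended up on: login managers set `XDG_SESSION_TYPE`, and the `WAYLAND_DISPLAY`/`DISPLAY` variables serve as fallbacks. A minimal sketch:

```python
import os

def session_type(env=os.environ) -> str:
    """Report the display protocol of the current session.

    XDG_SESSION_TYPE is set by most display managers; the WAYLAND_DISPLAY
    and DISPLAY variables are used as fallbacks when it is missing.
    """
    explicit = env.get("XDG_SESSION_TYPE")
    if explicit:
        return explicit
    if env.get("WAYLAND_DISPLAY"):
        return "wayland"
    if env.get("DISPLAY"):
        return "x11"
    return "unknown"

if __name__ == "__main__":
    print(session_type())
```

Note that X11 apps running under XWayland still see a `DISPLAY` variable inside a Wayland session, which is why `XDG_SESSION_TYPE` is checked first.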

    For advanced users, Arch still provides full flexibility:
    Go to Full Article


  • OpenClaw in 2026: What It Is, Who’s Using It, and Whether Your Business Should Adopt It
    by George Whittaker
    “probably the single most important release of software, probably ever.”

    — Jensen Huang, CEO of NVIDIA


    Wow! That’s a bold statement from one of the most influential figures in modern computing.

    But is it true? Some people think so. Others think it’s hype. Most are somewhere in between, aware of OpenClaw, but not entirely sure what to make of it. Are people actually using it? Yes. Who’s using it? More than you might expect. Is it experimental, or is it already changing how work gets done? That depends on how it’s being applied. Is it more relevant for businesses or consumers right now? That’s one of the most important, and most misunderstood, questions.

    This article breaks that down clearly: what OpenClaw is, how it works, who is using it today, and where it actually creates value.

    What makes OpenClaw different isn’t just the technology, it’s where it fits. Most of the AI tools people are familiar with still require a human to take the next step. They assist, but they don’t execute. OpenClaw changes that dynamic by connecting decision-making directly to action. Once you understand that shift, the rest of the discussion, who’s using it, how it’s being deployed, and where it creates value, starts to make a lot more sense.


    Top 10 Questions About OpenClaw
    What is OpenClaw?

    OpenClaw is an open-source AI agent framework that enables large language models like Claude, GPT, and Gemini to execute real-world tasks across software systems, including APIs, files, and workflows.

    What does OpenClaw actually do?

    OpenClaw functions as an execution layer that allows AI systems to take actions, such as sending emails, updating CRM records, or running scripts, instead of only generating responses.
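    To illustrate what an "execution layer" means in practice, here is a hypothetical dispatch loop, not OpenClaw's actual API, in which a model's structured JSON output is mapped to concrete handlers. Every name below (the tools, the request shape) is invented for the sketch:

```python
import json

# Hypothetical action registry: each tool the model may invoke maps to a handler.
def send_email(to: str, subject: str) -> str:
    return f"email to {to}: {subject}"  # placeholder for a real side effect

def update_crm(record_id: int, field: str, value: str) -> str:
    return f"CRM {record_id}.{field} = {value}"

TOOLS = {"send_email": send_email, "update_crm": update_crm}

def execute(model_output: str) -> str:
    """Parse a model's JSON action request and run the matching handler."""
    request = json.loads(model_output)
    handler = TOOLS.get(request["action"])
    if handler is None:
        raise ValueError(f"unknown action: {request['action']}")
    return handler(**request["args"])
```

    The point of the sketch is the shift it demonstrates: the model emits a structured request, and the framework, not a human, carries out the action.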

    Do you need to be a developer to use OpenClaw?

    No, but technical familiarity helps. Non-developers can use prebuilt workflows, while developers can customize and scale implementations more effectively.

    Is OpenClaw more suited for business or consumer use?

    OpenClaw is currently more suited for business and technical use cases where structured workflows exist. Consumer use is emerging but remains secondary.

    How is OpenClaw different from ChatGPT or Claude?

    ChatGPT and Claude generate outputs, while OpenClaw enables those outputs to trigger actions across connected systems.

    Who created OpenClaw?
    Go to Full Article


  • Linux Kernel Developers Adopt New Fuzzing Tools
    by George Whittaker
    The Linux kernel development community is stepping up its security game once again. Developers, led by key maintainers like Greg Kroah-Hartman, are actively adopting new fuzzing tools to uncover bugs earlier and improve overall kernel reliability.

    This move reflects a broader shift toward automated testing and AI-assisted development, as the kernel continues to grow in complexity and scale.
    What Is Fuzzing and Why It Matters
    Fuzzing is a software testing technique that feeds random or unexpected inputs into a program to trigger crashes or uncover vulnerabilities.

    In the Linux kernel, fuzzing has become one of the most effective ways to detect:
    - Memory corruption bugs
    - Race conditions
    - Privilege escalation flaws
    - Edge-case failures in subsystems
    Modern fuzzers like Syzkaller have already discovered thousands of kernel bugs over the years, making them a cornerstone of Linux security testing.
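    The core loop behind all of these tools is simple, even though production fuzzers like Syzkaller add syscall grammars and coverage feedback on top. A toy mutation fuzzer in Python, aimed at an ordinary parser function rather than the kernel, shows the idea:

```python
import random

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip, insert, or delete a few random bytes of the seed input."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        op = rng.choice(("flip", "insert", "delete"))
        if op == "flip" and data:
            data[rng.randrange(len(data))] ^= 1 << rng.randrange(8)
        elif op == "insert":
            data.insert(rng.randrange(len(data) + 1), rng.randrange(256))
        elif op == "delete" and data:
            del data[rng.randrange(len(data))]
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 1000, rng_seed: int = 0) -> list:
    """Run target on mutated inputs; collect every input that raises."""
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            target(case)
        except Exception:
            crashes.append(case)
    return crashes
```

    A real kernel fuzzer replaces the `target` call with executing syscall sequences in a sandboxed VM and watches for oopses instead of Python exceptions, but the generate-run-record loop is the same.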
    New Tools Enter the Scene
    Recently, kernel maintainers have begun experimenting with new fuzzing frameworks and tooling, including a project internally referred to as “clanker”, which has already been used to identify multiple issues across different kernel subsystems.

    Early testing has uncovered bugs in areas such as:
    - SMB/KSMBD networking code
    - USB and HID subsystems
    - Filesystems like F2FS
    - Wireless and device drivers
    The speed at which these issues were discovered suggests that these new tools are significantly improving bug detection efficiency.
    AI and Smarter Fuzzing Techniques
    One of the most interesting developments is the growing role of AI and machine learning in fuzzing.

    New research projects like KernelGPT use large language models to:
    - Automatically generate system call sequences
    - Improve test coverage
    - Discover previously hidden execution paths
    These techniques can enhance traditional fuzzers by making them smarter about how they explore the kernel’s behavior.

    Other advancements include:
    - Better crash analysis and deduplication tools (like ECHO)
    - Configuration-aware fuzzing to explore deeper kernel states
    - Feedback-driven fuzzing loops for improved coverage
    Together, these innovations help developers focus on the most meaningful bugs rather than sifting through duplicate reports.
    Why This Shift Is Happening Now
    The Linux kernel is one of the most complex software projects in existence. With millions of lines of code and contributions from thousands of developers, manually catching every bug is nearly impossible.
    Go to Full Article


  • GNOME 50 Reaches Arch Linux: A Leaner, Wayland-Only Future Arrives
    by George Whittaker
    Arch Linux users are among the first to experience the latest GNOME desktop, as GNOME 50 has begun rolling out through Arch’s repositories. Thanks to Arch’s rolling-release model, new upstream software like GNOME arrives quickly, giving users early access to the newest features and architectural changes.

    With GNOME 50, that includes one of the most significant shifts in the desktop’s history.
    A Major GNOME Milestone
    GNOME 50, officially released in March 2026 under the codename “Tokyo,” represents six months of development and refinement from the GNOME community.

    Unlike some previous versions, this release focuses less on dramatic redesigns and more on strengthening the foundation of the desktop, improving performance, modernizing graphics handling, and simplifying long-standing complexities.

    For Arch Linux users, that translates into a more streamlined and future-ready desktop environment.
    Goodbye X11, Hello Wayland-Only Desktop
    The headline change in GNOME 50 is the complete removal of X11 support from GNOME Shell and its window manager, Mutter.

    After years of gradual transition:
    - X11 sessions were first deprecated
    - Then disabled by default
    - And now fully removed in GNOME 50
    This means GNOME now runs exclusively on Wayland, with legacy X11 applications handled through XWayland compatibility layers.

    The result is a simpler, more modern graphics stack that reduces maintenance overhead and improves long-term performance and security.
    Improved Graphics and Display Handling
    GNOME 50 brings several key improvements to display and graphics performance:
    - Variable Refresh Rate (VRR) enabled by default
    - Better fractional scaling support
    - Improved compatibility with NVIDIA drivers
    - Enhanced HDR and color management
    These changes aim to deliver smoother animations, more responsive desktops, and better support for modern displays.

    For gamers and users with high-refresh monitors, these upgrades are especially noticeable.
    Performance and Responsiveness Gains
    Beyond graphics, GNOME 50 includes multiple performance optimizations:
    - Faster file handling in the Files (Nautilus) app
    - Improved thumbnail generation
    - Reduced stuttering in animations
    - Better resource usage across the desktop
    These refinements make the desktop feel more responsive, particularly on systems with demanding workloads or multiple monitors.
    New Parental Controls and Accessibility Features
    GNOME 50 also expands its focus on usability and accessibility.
    Go to Full Article


  • MX Linux Pushes Back Against Age Verification: A Stand for Privacy and Open Source Principles
    by George Whittaker
    The MX Linux project has taken a firm stance in a growing controversy across the Linux ecosystem: mandatory age-verification requirements at the operating system level. In a recent update, the team made it clear that they have no intention of implementing such measures, citing concerns over privacy, practicality, and the core philosophy of open-source software.

    As governments begin introducing laws that could require operating systems to collect user age data, MX Linux is joining a group of projects resisting the shift.
    What Sparked the Debate?
    The discussion around age verification stems from new legislation, particularly in regions like the United States and Brazil, that aims to protect minors online. These laws may require operating systems to:
    - Collect user age or date of birth during setup
    - Provide age-related data to applications
    - Enable content filtering based on age categories
    At the same time, underlying Linux components such as systemd have already begun exploring technical changes, including storing birthdate fields in user records to support such requirements.
    MX Linux Says “No” to Age Verification
    In response, the MX Linux team has clearly rejected the idea of integrating age verification into their distribution. Their reasoning is rooted in several key concerns:
    - User privacy: Collecting age data introduces sensitive personal information into systems that traditionally avoid such tracking
    - Feasibility: Implementing consistent, secure age verification across a decentralized OS ecosystem is highly complex
    - Philosophy: Open-source operating systems are not designed to act as data collectors or gatekeepers
    The developers emphasized that they do not want to burden users with intrusive requirements and instead encouraged concerned individuals to direct their efforts toward policymakers rather than Linux projects.
    A Broader Resistance in the Linux Community
    MX Linux is not alone. The Linux world is divided on how, or whether, to respond to these regulations.

    Some projects are exploring compliance, while others are pushing back entirely. In fact, age verification laws have sparked:
    - Strong debate among developers and maintainers
    - Concerns about enforceability on open-source platforms
    - New projects explicitly created to resist such requirements
    In some extreme cases, distributions have even restricted access in certain regions to avoid legal complications.
    Why This Matters
    At its core, this issue goes beyond a single feature; it raises fundamental questions about what an operating system should be.

    Linux has long stood for:
    Go to Full Article


  • LibreOffice Drives Europe’s Open Source Shift: A Growing Push for Digital Sovereignty
    by George Whittaker
    LibreOffice is increasingly at the center of Europe’s push toward open-source adoption and digital independence. Backed by The Document Foundation, the widely used office suite is playing a key role in helping governments, institutions, and organizations reduce reliance on proprietary software while strengthening control over their digital infrastructure.

    Across the European Union, this shift is no longer experimental; it’s becoming policy.
    A Broader Movement Toward Open Source
    Europe has been steadily moving toward open-source technologies for years, but recent developments show clear acceleration. Governments and public institutions are actively transitioning away from proprietary platforms, often citing concerns about vendor lock-in, cost, and data control.

    According to recent industry data, European organizations are adopting open source faster than their U.S. counterparts, with vendor lock-in concerns cited as a major driver.

    LibreOffice sits at the center of this trend as a mature, fully open-source alternative to traditional office suites.
    LibreOffice as a Strategic Tool
    LibreOffice isn’t just another productivity application; it has become a strategic component in Europe’s digital policy framework.

    The software:
    - Is fully open source and community-driven
    - Supports open standards like OpenDocument Format (ODF)
    - Allows governments to avoid dependency on specific vendors
    - Enables long-term control over data and infrastructure
    These characteristics align closely with the European Union’s broader strategy to promote interoperability and transparency through open standards.
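    One reason large migrations are practical is that LibreOffice can batch-convert legacy documents to ODF from the command line via its headless mode. The sketch below builds the standard `soffice --headless --convert-to` invocation; the file paths are illustrative, and running it requires LibreOffice to be installed:

```python
import subprocess
from pathlib import Path

def convert_to_odf_cmd(doc: Path, outdir: Path) -> list:
    """Build the LibreOffice headless command converting one document to ODT."""
    return [
        "soffice", "--headless",
        "--convert-to", "odt",   # target the OpenDocument text format
        "--outdir", str(outdir),
        str(doc),
    ]

if __name__ == "__main__":
    cmd = convert_to_odf_cmd(Path("report.docx"), Path("converted"))
    subprocess.run(cmd, check=True)  # requires a local LibreOffice install
```

    Wrapped in a loop over a document tree, the same invocation scales to the kind of bulk migrations described above.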
    Government Adoption Across Europe
    LibreOffice adoption is already happening at scale across multiple countries and sectors.

    Examples include:
    - Germany (Schleswig-Holstein): transitioning tens of thousands of government systems to Linux and LibreOffice
    - Denmark: replacing Microsoft Office in public institutions as part of a broader digital sovereignty initiative
    - France and Italy: deploying LibreOffice across ministries and defense organizations
    - Spain and local governments: adopting LibreOffice to standardize workflows and reduce costs
    In some cases, migrations involve hundreds of thousands of systems, demonstrating that open-source office software is viable at national scale.
    Go to Full Article


  • From Linux to Blockchain: The Infrastructure Behind Modern Financial Systems
    by George Whittaker
    The modern internet is built on open systems. From the Linux kernel powering servers worldwide to the protocols that govern data exchange, much of today’s digital infrastructure is rooted in transparency, collaboration, and decentralization. These same principles are now influencing a new frontier: financial systems built on blockchain technology.

    For developers and system architects familiar with Linux and open-source ecosystems, the rise of cryptocurrency is not just a financial trend; it is an extension of ideas that have been evolving for decades.
    Open-Source Foundations and Financial Innovation
    Linux has long demonstrated the power of decentralized development. Instead of relying on a single authority, it thrives through distributed contributions, peer review, and community-driven improvement.

    Blockchain technology follows a similar model. Networks like Bitcoin operate on open protocols, where consensus is achieved through distributed nodes rather than centralized control. Every transaction is verified, recorded, and made transparent through cryptographic mechanisms.

    For those who have spent years working within Linux environments, this architecture feels familiar. It reflects a shift away from trust-based systems toward verification-based systems.
    Understanding the Stack: Nodes, Protocols, and Interfaces
    At a technical level, cryptocurrency systems are composed of multiple layers. Full nodes maintain the blockchain, validating transactions and ensuring network integrity. Lightweight clients provide access to users without requiring full data replication. On top of this, exchanges and platforms act as interfaces that connect users to the underlying network.

    For developers, interacting with these systems often involves APIs, command-line tools, and automation scripts, tools that are already integral to Linux workflows. Managing wallets, verifying transactions, and monitoring network activity can all be integrated into existing development environments.
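    The "verification-based" character of these systems ultimately rests on cheap, reproducible cryptographic checks. Bitcoin, for example, identifies blocks and transactions by a double SHA-256 hash, which any node, or any Linux machine with Python's standard library, can recompute independently:

```python
import hashlib

def double_sha256(payload: bytes) -> str:
    """Bitcoin-style identifier: SHA-256 applied twice, returned as hex."""
    first = hashlib.sha256(payload).digest()
    return hashlib.sha256(first).hexdigest()

if __name__ == "__main__":
    # Anyone running this on any machine gets the same digest for the
    # same payload, which is what makes independent verification possible.
    print(double_sha256(b"hello"))
```

    Because the digest is deterministic, no trust in the party that produced it is required; recomputation is the verification.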
    Go to Full Article


Page last modified on November 02, 2011, at 10:01 PM