NTLUG

Linux is free.
Life is good.

Linux Training
10am on Meeting Days!

1825 Monetary Lane Suite #104 Carrollton, TX

Do a presentation at NTLUG.

What is the Linux Installation Project?

Real companies using Linux!

Not just for business anymore.

Providing ready-to-run platforms on Linux


LinuxSecurity - Security Advisories

  • Debian Bookworm Corosync Critical DoS Memory Disclosure DSA-6261-1
    Two security vulnerabilities were discovered in the Corosync cluster engine, which could result in denial of service or memory disclosure. For the oldstable distribution (bookworm), these problems have been fixed in version 3.1.7-1+deb12u2. For the stable distribution (trixie), these problems have been fixed in


  • Debian Bookworm Tor Important DoS Issues DSA-6260-1 CVE-2026-44597
    Multiple security vulnerabilities were discovered in Tor, a connection-based low-latency anonymous communication system, which could result in denial of service. For the oldstable distribution (bookworm), these problems have been fixed in version 0.4.9.8-0+deb12u1.


LWN.net

  • More stable kernels with partial Dirty Frag fixes
    Greg Kroah-Hartman has released the 6.1.171, 5.15.205, and 5.10.255 stable kernels, quickly followed by the 6.1.172 and 5.15.206 kernels. This is another round of stable kernels to provide fixes for one of the CVEs (CVE-2026-43284) assigned following the Dirty Frag and Copy Fail 2 security disclosures. There is not, yet, a stable kernel with a fix for CVE-2026-43500, though a patch to fix the second half is in the works.



  • [$] Forgejo "carrot disclosure" raises security questions
    An unusual, some might say hostile, approach to disclosing an alleged remote-code-execution (RCE) flaw in the Forgejo software-collaboration platform has sparked a multifaceted conversation. A so-called "carrot disclosure" in April has raised questions about the researcher's methods of unveiling a security problem, Forgejo's security policies, and the project's overall security posture.


  • killswitch for short-term emergency vulnerability mitigation
    It seems that we are in for an extended period of the disclosure of vulnerabilities before fixes become available. One possible way of coping with this flood might be the killswitch proposal from Sasha Levin. In short, killswitch can immediately disable access to specific functionality in a running kernel, essentially blasting a vulnerable path (and its associated functionality) out of existence until a fix can be installed. "For most users, the cost of 'this socket family stops working for the day' is much smaller than the cost of running a known vulnerable kernel until the fix lands."


  • [$] A 2026 DAMON update
    The kernel's DAMON subsystem provides user-space monitoring and management of system memory. DAMON is developing rapidly, so an update on its progress has become a regular feature of the annual Linux Storage, Filesystem, Memory Management, and BPF Summit. This tradition continued at the 2026 gathering with an update from DAMON creator SeongJae Park covering a long list of new capabilities — tiering, data-attributes monitoring, transparent huge pages, and more — being added to this subsystem.


  • Security updates for Friday
    Security updates have been issued by AlmaLinux (libsoup and mingw-libtiff), Debian (apache2, chromium, lcms2, libreoffice, and prosody), Fedora (openssl and perl-Starman), Oracle (git-lfs, libsoup, and perl-XML-Parser), Slackware (libgpg, mozilla, and php), SUSE (389-ds, cairo, cf-cli, chromedriver, cri-tools, freeipmi, gnutls, grafana, java-11-openjdk, java-17-openjdk, jetty-minimal, libmariadbd-devel, librsvg, mesa, mozjs52, mutt, nix, opencryptoki, python-Django, python-django, python-pytest, rmt-server, thunderbird, traefik, webkit2gtk3, wireshark, and xen), and Ubuntu (civicrm, dpkg, htmlunit, lcms2, libpng1.6, linux, linux-*, linux-azure, linux-azure-fips, linux-raspi, linux-xilinx, lua5.1, nasm, opam, openexr, openjpeg2, owslib, postfix, postfixadmin, and vim).


  • Four stable kernels with partial fixes for Dirty Frag
    Greg Kroah-Hartman has announced the release of the 7.0.5, 6.18.28, 6.12.87, and 6.6.138 stable kernels. These kernels contain a partial fix for the Dirty Frag and Copy Fail 2 security flaws. Kroah-Hartman has confirmed that a second patch is required, but it is still in development and has not yet been merged.



  • Dirty Frag: a zero-day universal Linux LPE
    Hyunwoo Kim has announced the Dirty Frag security flaw, a local-privilege-escalation (LPE) vulnerability similar to the recently disclosed Copy Fail flaw:

    Because the embargo has now been broken, no patches or CVEs exist for these vulnerabilities. After consultation with the linux-distros@vs.openwall.org maintainers, and at the maintainers' request, I am publicly releasing this Dirty Frag document.

    As with the previous Copy Fail vulnerability, Dirty Frag likewise allows immediate root privilege escalation on all major distributions.

    Kim, who discovered the flaw and had attempted a coordinated disclosure set for May 12, has released the code for an exploit, as well as an example script to remove the vulnerable modules. A full write-up, with the disclosure timeline, is also available. It's unknown at this time whether this is an example of parallel discovery or how the third party was able to disclose it prior to the end of the embargo. We will be following up as more information comes to light.


  • [$] A new era for memory-management maintainership
    On April 21, Andrew Morton let it be known that he intends to begin stepping away from the maintainership of the kernel's memory-management subsystem — a responsibility he has carried since before memory management was even seen as its own subsystem. At the 2026 Linux Storage, Filesystem, Memory Management, and BPF Summit, one of the first sessions in the memory-management track was devoted to how the maintainership would be managed going forward. There are a lot of questions still to be answered.


  • An update on KDE's Union style engine
    Arjen Hiemstra has published an article on the status of the Union project: a single system to support all of KDE's technologies used for styling applications.

    The work on Union's Breeze implementation has progressed to the point where it is very hard to distinguish whether or not you are running the Union version. We have also tested with a bunch of applications and made sure that any differences were fixed. So we are at a stage where we need to get Union into the hands of more people, both to get extra people testing whether there are any major issues, but also to have interested people creating new styles.

    This means that with the upcoming Plasma 6.7 release, we plan to include Union. Discussion is currently ongoing whether we will enable it by default, but even if not there will be a way to try it out.

    See Hiemstra's introductory article on Union, published in February 2025, for more about the project and its creation. KDE 6.7 is expected to be released in mid-June.



  • Security updates for Thursday
    Security updates have been issued by AlmaLinux (dovecot, fence-agents, freeipmi, git-lfs, image-builder, kernel, libsoup, osbuild-composer, and python-tornado), Debian (apache2, libdatetime-timezone-perl, lrzip, tzdata, and wireshark), Fedora (dovecot, forgejo-runner, gh, gnutls, krb5, nano, pdns, pyOpenSSL, squid, vim, and xorg-x11-server-Xwayland), Mageia (graphicsmagick, kernel-linus, krb5-appl, libexif, libtiff, nano, nginx, ntfs-3g, opam, perl-Net-CIDR-Lite, perl-Starlet, perl-Starman, tcpflow, and virtualbox), Oracle (dovecot, fence-agents, freeipmi, image-builder, kernel, libcap, LibRaw, libsoup, openssh, osbuild-composer, python, python-tornado, python3, systemd, thunderbird, and tigervnc), SUSE (containerd, curl, erlang, flatpak, java-11-openjdk, java-21-openjdk, java-25-openjdk, liblxc-devel, libpng12, libthrift-0_23_0, openCryptoki, openexr, openssl-3, python3, python311-social-auth-core, rclone, skim, and thunderbird), and Ubuntu (apache2, coin3, editorconfig-core, insighttoolkit, linux, linux-aws, linux-aws-6.17, linux-gcp, linux-gcp-6.17, linux-hwe-6.17, linux-oracle, linux-realtime, linux-realtime-6.17, linux-azure, linux-azure-6.17, linux-oem-6.17, linux-azure-5.15, linux-gcp-6.8, nghttp2, python-dynaconf, slurm-wlm, swish-e, and webkit2gtk).



  • [$] LWN.net Weekly Edition for May 7, 2026
    Inside this week's LWN.net Weekly Edition:
    Front: LLMs and security; restartable sequences and TCMalloc; Fedora and GNOME bug reports; Prolly trees; Arm on s390. Briefs: NHS open source; Alpine outage; GCC 16.1; Incus 7.0 LTS; NetHack 5.0.0; PHP license; Quotes; ... Announcements: Newsletters, conferences, security updates, patches, and more.


  • [$] LLM-driven security reports disrupt coordinated disclosure
    Predictions that LLM tools would cause a surge in reports of security vulnerabilities have, unquestionably, been borne out. As expected, maintainers are having to wade through more security reports than ever before; in addition, LLM tools are disrupting traditional coordinated-disclosure practices as well. The method of Copy Fail's disclosure, in particular, left vendors, projects, and users scrambling. In addition, maintainers are seeing parallel discovery of the same security flaws within the embargo window. Both of these developments mean that coordinated security disclosures may become a thing of the past.


  • Incus 7.0 LTS released
    Version 7.0 of the Incus container and virtual-machine management system has been released. Notable changes in this release include the inclusion of a low-level backup API, the addition of basic S3 operations directly in Incus to replace the now-unmaintained MinIO project, as well as the removal of support for cgroups v1 and xtables (iptables/ip6tables/ebtables). This is a long-term-support (LTS) release, with support through June 2031.

    The first two years will feature bug and security fixes as well as minor usability improvements, delivered through occasional point releases (7.0.x). After that initial two years, Incus 7.0 LTS will move to security-only maintenance for the remainder of its five years of support.

    A total of 204 individuals contributed to Incus between the 6.0 LTS and 7.0 LTS releases, with 45 contributing between the 6.23 and 7.0 LTS releases.


  • Security updates for Wednesday
    Security updates have been issued by AlmaLinux (corosync, dovecot, image-builder, python-tornado, resource-agents, and systemd), Debian (openjdk-11, openjdk-17, and pyjwt), Fedora (pdns, pyOpenSSL, and squid), Slackware (hunspell), SUSE (alloy, avahi, bubblewrap, cmctl, coredns, curl, dpkg, firefox, golang-github-prometheus-prometheus, grafana, libpng12, PackageKit, sed, and xen), and Ubuntu (docker.io-app, nghttp2, python-django, and python-mako).


LXer Linux News


  • SpacemiT K3 integrates 8-core RISC-V CPU cluster and 60 TOPS AI engine
    SpacemiT’s Key Stone K3 is a high-performance RISC-V SoC designed for AI and edge computing applications. The processor combines eight X100 64-bit RISC-V CPU cores with eight A100 AI-oriented compute cores, along with multimedia, networking, and high-speed I/O support targeting edge and embedded AI workloads. The CPU subsystem integrates eight X100 RISC-V cores operating at […]


  • Nocturne Is The Latest Music Player For GNOME To Hit v1.0
    Since GNOME 48, Decibels has been the official audio player of the GNOME desktop, but there is no shortage of other GNOME/GTK-aligned music players. Last month brought the big Amerbol music player update, and there are Lollypop and others. The latest GNOME-aligned music player to hit the 1.0 milestone is Nocturne...


  • FEX 2605 Brings Performance Improvements, Initial Snapdragon X2 Elite Fixes
    FEX 2605 is out this weekend as the newest monthly feature release to this emulator for running Linux x86_64 binaries on ARM64 (AArch64) devices. This is the open-source project sponsored by Valve and planned for use with the upcoming Steam Frame as well as being relevant to Linux gaming on other 64-bit ARM laptops and other devices...



  • HP Z6 G5 A Continues Working Out Well For Linux-Friendly, High-End Workstation
    In late 2023 I reviewed the HP Z6 G5 A workstation that at the time was built around the AMD Ryzen Threadripper PRO 7000 series and NVIDIA RTX Ada Generation graphics. More recently, HP has revised the Z6 G5 A workstation for the latest Threadripper PRO 9000 series and NVIDIA RTX PRO Blackwell graphics. HP sent over the upgraded Z6 G5 A workstation that I've been benchmarking the past few weeks. This workstation remains Linux-friendly down to convenient LVFS/Fwupd support and delivers stellar performance with the Zen 5 Threadripper and NVIDIA Blackwell combination.


  • IOT-GATE-RPI5 is a Fanless Raspberry Pi CM5 Gateway with RS485 and CAN-FD
    CompuLab has unveiled the IOT-GATE-RPI5, an industrial IoT edge gateway built around the Raspberry Pi Compute Module 5. The system combines the BCM2712 quad-core Cortex-A76 processor with industrial interfaces, optional cellular connectivity, and support for wide operating temperatures. The gateway is based on the Broadcom BCM2712 processor with four Cortex-A76 cores clocked at 2.4GHz, paired […]



  • AMD's Local, Open-Source AI Can Now Easily Interact With Your Gmail
    AMD software engineers continue rapidly advancing their open-source software efforts around local AI/LLM use on consumer-class Radeon and Ryzen hardware. AMD GAIA 0.17.6 was released on Thursday with more improvements for local AI processing on Windows, Linux, and even macOS. For those trusting enough in local LLM pipelines to do the right thing, there is even integration now for AMD GAIA to interface with your Gmail account...




Slashdot

  • GM Secretly Sold California Drivers' Data, Agrees to Pay $12.75M In Privacy Settlement
    "General Motors sold the data of California drivers without their knowledge or consent," says California's attorney general, "and despite numerous statements reassuring drivers that it would not do so." In 2024, The New York Times "reported that automakers including GM were sharing information about their customers' driving behavior with insurance companies," remembers TechCrunch, "and that some customers were concerned that their insurance rates had gone up as a result." Now General Motors "has reached a privacy-related settlement with a group of law enforcement agencies led by California Attorney General Rob Bonta..." The settlement announcement from Bonta's office similarly alleges that GM sold "the names, contact information, geolocation data, and driving behavior data of hundreds of thousands of Californians" to Verisk Analytics and LexisNexis Risk Solutions, which are both data brokers. Bonta's office further alleges that this data was collected through GM's OnStar program, and that the company made roughly $20 million from data sales. However, Bonta's office also said the data did not lead to increased insurance prices in California, "likely because under California's insurance laws, insurers are prohibited from using driving data to set insurance rates." As part of the settlement, GM has agreed to pay $12.75 million in civil penalties and to stop selling driving data to any consumer reporting agencies for five years, Bonta's office said. GM has also agreed to delete any driver data that it still retains within 180 days (unless it obtains consent from customers), and to request that Lexis and Verisk delete that data. "This trove of information included precise and personal location data that could identify the everyday habits and movements of Californians," according to the attorney general's announcement.
The settlement "requires General Motors to abandon these illegal practices, and underscores the importance of data minimization in California's privacy law — companies can't just hold on to data and use it later for another purpose." "Modern cars are rolling data collection machines," said San Francisco District Attorney Brooke Jenkins. "Californians must have confidence that they know what data is being collected, how it is being used, and what their opt-out rights are... This case sends a strong message that law enforcement will take action when California privacy laws are not scrupulously followed."


    Read more of this story at Slashdot.


  • Amazon Relents, Lets its Programmers Use OpenAI's Codex and Anthropic's Claude
    An anonymous reader shared this report from Futurism: In November, Amazon leaders sent an internal memo to employees, pushing them to use its in-house code-generating tool, Kiro, over third-party alternatives from competitors. "While we continue to support existing tools in use today, we do not plan to support additional third party, AI development tools," the memo read, as quoted by Reuters at the time. "As part of our builder community, you all play a critical role shaping these products and we use your feedback to aggressively improve them." It was an unusual development, considering the tens of billions of dollars the e-commerce giant has invested in its competitors in the space, including Anthropic and OpenAI... Half a year later, Amazon is singing a dramatically different tune. As Business Insider reports, Amazon is officially throwing in the towel, succumbing to growing calls among employees for access to OpenAI's Codex and Anthropic's Claude... Given the unfortunate optics of opening the floodgates for Codex and Claude Code, an Amazon spokesperson told the publication in a statement that teams are still "primarily using" Kiro, claiming that 83 percent of engineers at the company are leaning on it.


    Read more of this story at Slashdot.


  • Rocket Lab Reports Growing Demand for Commercial Space Products. Stock Surges 34%
    For just the first three months of 2026, Rocket Lab's launch business reports $63.7 million in revenue, reports CNBC — plus another $136.7 million from its space systems business. Besides beating Wall Street's expectations, Rocket Lab also announced that its backlog has more than doubled from a year ago to $2.2 billion, and that it's buying space robotics company Motiv Space Systems. Friday its stock price shot up 34% in one day... Rocket Lab's stock has more than quadrupled over the past year, benefiting from skyrocketing demand for businesses tied to the space economy ahead of SpaceX's hotly anticipated IPO later this year. Demand for space systems and satellites is also escalating as President Donald Trump pursues his ambitious Golden Dome missile defense project and NASA's crewed Artemis missions rev up. Rocket Lab said Thursday that it signed its largest contract ever with a confidential customer for its Neutron and Electron rockets through 2029, weeks after landing a $190 million deal for 20 hypersonic test flights... "The demand signal is clear," CEO Peter Beck said on an earnings call with analysts, calling the pace of new product releases from the company this year "relentless"... Rocket Lab's good news lifted other space companies. Firefly Aerospace and Intuitive Machines both jumped more than 20%, while Redwire gained 19%. Voyager Technologies rose 14%. "The company anticipates revenue between $225 million and $240 million during the second quarter."


    Read more of this story at Slashdot.


  • Unemployment Ticked Up in America's IT Sector
    IT sector unemployment "increased to 3.8% in April from 3.6% in March," reports the Wall Street Journal. But they add that the increase reflects "an ongoing uncertainty in tech as AI continues to play havoc with hiring. That's according to analysis from consulting firm Janco Associates, which bases its findings on data from the U.S. Labor Department." On Friday, the department said the economy added 115,000 jobs, buoyed by gains in industries including retail, transportation and warehousing, and healthcare. The unemployment rate was unchanged at 4.3%. But the information sector lost 13,000 jobs in April. While it's still too early to say exactly how AI is affecting employment overall, some businesses, especially in the tech industry, have said it's part of the reason they're cutting staff. In April, Meta Platforms said it would lay off 10% of its staff, or roughly 8,000 people, as it seeks to streamline operations and pay for its own massive investments in AI. Nike will reduce its workforce by roughly 1,400 workers, or about 2%, mostly in its tech department, as it simplifies global operations. And Snap is planning to eliminate 16% of its workforce, or about 1,000 positions, as it aims to boost efficiency. In other areas of IT, which includes telecommunications and data-processing, employment is now down 11%, or 342,000 jobs, from its most recent peak in November 2022. But there's not just AI to blame. Inflation and economic uncertainty linked to the Iran conflict is giving some chief executives and tech leaders reason to pull back or pause their IT hiring, said Janco Chief Executive Victor Janulaitis. The article even notes that postings for software developer jobs "are up 15% year-over-year on job-search platform Indeed, according to Hannah Calhoon, its vice president of AI". But employers do seem to be looking for experienced developers, which could pose a problem for recent college graduates.


    Read more of this story at Slashdot.


  • The EU Considers Restricting Use of US Cloud Platforms for Sensitive Government Data
    CNBC reports: The European Union is considering rules that would restrict its member governments' use of U.S. cloud providers to handle sensitive data, sources familiar with the talks told CNBC. The European Commission — the EU's executive branch — is expected to present its "Tech Sovereignty Package" on May 27, which will include a range of measures aimed at bolstering the bloc's strategic autonomy in key digital areas. As part of preparations for that package, discussions are taking place within the Commission around limiting the exposure of sensitive public-sector data to cloud platforms provided by companies outside of the EU, two Commission officials, who asked to remain anonymous as they weren't authorized to discuss private talks, told CNBC... "The core idea is defining sectors that have to be hosted on European cloud capacity," one of the officials said. They added that companies providing cloud solutions from third countries, including the U.S., could be impacted. Proposals would not prohibit overseas companies' cloud platforms from government contracts entirely, but limit their use in processing sensitive data at public sector organizations, depending on the level of sensitivity, they added. The officials said that talks are ongoing and yet to be finalized... The officials told CNBC there are discussions around proposing that financial, judicial and health data processed by governments and public-sector organizations require high levels of sovereign cloud infrastructure.


    Read more of this story at Slashdot.


  • NYT: 'Meta's Embrace of AI Is Making Its Employees Miserable'
    "Meta's embrace of AI is making its employees miserable," reports the New York Times. And "After Meta said late last month that it would start tracking employees' computer use, hundreds of workers spoke up." (One employee even told Meta's CTO in an internal post, "Your callousness to the concerns of your own employees is concerning.") In an internal post last month, Meta told its U.S. employees that it was making a change that would affect tens of thousands of them. What employees typed into their computer, how they moved their mouse, where they clicked and what they saw on their screen would be tracked, Meta said. The goal, the company said, was to capture employee data so Meta's artificial intelligence models could learn "how people actually complete everyday tasks using computers." Many workers immediately revolted. In online comments, they blasted the tracking as a privacy violation, calling it antisocial and callous... [One engineering manager even asked "How do we opt out?"] "There is no option to opt-out on your corporate laptop," replied Andrew Bosworth, Meta's chief technology officer. Employees reacted by posting more than 100 angry and surprised emoji, according to the messages.... Meta is pushing its 78,000 employees to adopt AI tools and factoring their use of the technology into performance reviews. The company is also tracking employees' computer work to feed and train its AI models. And it is cutting jobs to offset its AI spending, saying last month that it would slash 10% of its workforce. That has led to anger and anxiety as employees await news of whether they are affected by the layoffs, which are slated to be carried out May 20, according to 11 current and former Meta employees. Some said they no longer saw Meta as a place for a long career. Others were looking for new jobs or trying to signal that they wanted to be laid off so they could receive severance pay, the current and former employees said.
"It's incredibly demoralizing," an employee who does user research wrote in an internal post, which was reviewed by the Times... Meta also introduced internal dashboards to track employees' consumption of "tokens," a unit of AI use that is roughly equivalent to four characters of text, four people said. Some said the dashboards were a pressure tactic to encourage competition with colleagues. That led some employees to make so many AI agents that others had to introduce agents to find agents, and agents to rate agents, two people said.


    Read more of this story at Slashdot.
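The "roughly equivalent to four characters of text" definition of a token quoted above lends itself to a quick back-of-the-envelope estimator. This is only a sketch of that rule of thumb (the function name is illustrative, and real tokenizers vary by model and language, so this is not how Meta's dashboards actually count):

```python
# Rough token estimate using the ~4-characters-per-token rule of thumb
# quoted in the article. Real tokenizers vary by model and language,
# so treat this as an order-of-magnitude estimate only.
def estimate_tokens(text: str) -> int:
    return max(1, round(len(text) / 4))

# A 40-character string comes out to roughly 10 tokens.
print(estimate_tokens("a" * 40))
```
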


  • 'Changing of the Guard'? AMD, Intel, and Micron Soar While Nvidia Lags
    While Nvidia has dominated the "infrastructure boom" since 2022's launch of ChatGPT and "the generative AI craze," CNBC writes that "This week offered the starkest illustration yet of what Mizuho analyst Jordan Klein said could be a 'changing of the guard in AI.'" Chipmakers Advanced Micro Devices and Intel notched gains of about 25%, while memory maker Micron jumped more than 37% and fiber-optic cable maker Corning climbed about 18%. All four of those companies have more than doubled in value this year, with Intel leading the way, up well over 200%. Nvidia, meanwhile, is only slightly ahead of the Nasdaq in 2026, gaining 15% for the year, aided by an 8% rally this week. In spreading the wealth to a wider swath of hardware companies, investors are clearly betting that the bull market in AI has long legs and that data centers are going to need a wider array of advanced components for years to come. Memory has been the biggest theme of late due to a global shortage that's driven up prices and turned Micron, a 47-year-old company tucked in a sleepy corner of the semiconductor market, into one of the hottest trades over the past 12 months. Micron blew past an $800 billion market capitalization for the first time this week, and the stock is now up over 750% in the past year. CEO Sanjay Mehrotra told CNBC in March that key customers are only getting "50% to two-thirds of their requirements" because of supply issues. The memory market is largely dominated by Micron, along with Korea-based Samsung and SK Hynix, which are also both in the midst of historic rallies... Bank of America estimates the data center CPU market could more than double from $27 billion in 2025 to $60 billion in 2030. AMD's quarterly results this week underscored the emerging trend, as earnings, revenue and guidance sailed past estimates on strong data center growth.
The company has long led the CPU charge, and CEO Lisa Su said on the earnings call that AMD now expects 35% growth over the next three to five years in the server CPU market, up from a forecast of 18% growth that the company provided in November. The article cites two other big movers: Intel "is in the midst of a revival sparked by a major investment from the U.S. government last year. Intel's stock had its best month on record in April, more than doubling, and has continued notching massive gains, rising 33% in the early days of May." Nvidia still remains the world's most valuable company "and is expected to show revenue growth of 70% this fiscal year," the article points out — adding that companies like Corning are also benefiting from Nvidia partnerships. "Glass maker Corning, which celebrated its 175th anniversary this week, signed a massive deal with Nvidia on Wednesday that involves the development of three new U.S. factories dedicated entirely to optical technologies... likely a major step in Nvidia's move away from copper cables and towards fiber-optic cables as it builds out its rack-scale systems."


    Read more of this story at Slashdot.


  • Open Source Registries Join Linux Foundation Working Group to Address Machine-Generated Traffic
    Under the nonprofit Linux Foundation, "a new Sustaining Package Registries Working Group will seek to identify concrete funding, governance, and security practices," reports ZDNet, "to keep code flowing as download counts grow.... Because software builds, continuous integration pipelines, and AI systems hammer registries at machine speed rather than human speed, the sites can't keep up. "That growth has brought a surge in bot traffic, automated publishing, security reports, and outright abuse, exposing what the working group bluntly calls a 'sustainability gap'." Sonatype CTO Brian Fox, who oversees the Maven Central Java registry, estimates open-source registries saw 10 trillion downloads in 2025. And "The same pattern is appearing across ecosystems. More machine traffic. More automation. More scanning. More expectations around uptime, integrity, provenance, and policy enforcement. More cost. More support burden. More dependency on infrastructure that the industry still talks about as though it runs on goodwill and spare time." ZDNet reports that "To tackle that, Sonatype has teamed up with the Linux Foundation and other package registry leaders, including Alpha-Omega, Eclipse Foundation (OpenVSX), OpenJS Foundation, OpenSSF, Packagist, Python Software Foundation, Ruby Central (RubyGems), and the Rust Foundation (Crates)." The idea is to give operators a neutral forum to discuss money, governance, and shared operational burdens openly. Once that's dealt with, they'll coordinate how to explain those realities back to companies and organizations that have long assumed registries are "free." No, they're not. They never were. As the Linux Foundation pointed out, "Registries today run primarily on two things: (1) infrastructure donations and credits; and (2) heroic efforts from small paid teams (themselves funded by donations and grants) and unpaid volunteers that operate and maintain registry services.
The bulk of donations and grants comes from a small set of donors and doesn't scale with demands on the registry." The working group is explicitly positioned as a venue where registry leaders and ecosystem stakeholders can align on "practical, community-minded" ways to sustain that infrastructure, rather than each operator improvising its own survival plan in isolation. ZDNet says the group will also coordinate security practices and information, and craft frameworks "that make it politically and legally possible to introduce sustainable funding models without fracturing communities." And they will also "align messaging and educational content so developers, companies, and policymakers finally understand what it costs to run these services."


    Read more of this story at Slashdot.


  • Will Maryland's Utility Bills Increase $1.6B to Support Other States' Datacenters?
    To upgrade its grid for data centers, PJM Interconnection (which serves 13 states) plans to spend $22 billion — and charge nearly $2 billion of that to customers in Maryland, argues Maryland's Office of People's Counsel. The money "will be recovered in rates for decades" and "drive up Maryland customer bills by $1.6 billion over the next ten years alone," they said Friday, announcing an official complaint filed with America's Federal Energy Regulatory Commission. Extra demand is expected from Ohio, Pennsylvania, and Illinois "where demands driven by data centers are projected to grow substantially by 2036," they explain. But that means that Maryland customers "are subsidizing data center-driven transmission buildout by virtue of geographic proximity..." Tom's Hardware explains: That means an extra $823 million for residential (approx. $345 per customer), $146 million for commercial (approx. $673 per customer), and $629 million for industrial customers (approx. $15,074 per customer)... "Maryland customers have neither caused the need for these billions in new transmission projects nor will they meaningfully benefit from them," [according to Maryland People's Counsel David S. Lapp].... This is one of the biggest reasons why many AI hyperscalers are facing pushback from the communities where they intend to place their data centers. At the moment, around 69 jurisdictions have passed some sort of moratorium on projects like these, and a survey has shown that nearly half of Americans do not want a data center in their neighborhood. Debates around these projects are passionate, with a few cases turning violent and even resulting in shootings (thankfully, without any casualties), especially as many feel that the construction of these power-hungry assets is threatening their lifestyles and quality of life. Thanks to long-time Slashdot reader noshellswill for sharing the news.


    Read more of this story at Slashdot.


  • Rush Rescue Mission for NASA's $500M Space Telescope Passes Key Milestone
    NASA's $500 million Neil Gehrels Swift space observatory was launched in 2004. But it's now "at risk of falling back through the atmosphere and burning up without intervention," reports Spaceflight Now. Fortunately, a mission to prevent that "just passed a notable prelaunch testing milestone." On Friday, NASA announced that the Link spacecraft, manufactured by Katalyst Space Technologies to intervene before Swift's fate is sealed, completed its slate of environmental testing at the agency's Goddard Space Flight Center in Greenbelt, Maryland... "Swift will likely re-enter the atmosphere sometime later this year if we don't attempt to lift it to a higher altitude," [said John Van Eepoel, Swift's mission director at NASA Goddard, in a NASA press release]. "Katalyst has gotten to this point in just eight months, and we're glad they were able to use NASA's facilities to test Link and draw on our expertise to help tackle questions that popped up along the way...." "Given how quickly Swift's orbit is decaying, we are in a race against the clock, but by leveraging commercial technologies that are already in development, we are meeting this challenge head-on," said Shawn Domagal-Goldman, acting director, Astrophysics Division, NASA Headquarters, at the time... "Attempting an orbit boost is both more affordable than replacing Swift's capabilities with a new mission, and beneficial to the nation — expanding the use of satellite servicing to a new and broader class of spacecraft...." Swift is in an orbit inclined 20.6 degrees from the equator, which is why Katalyst selected Northrop Grumman's Pegasus XL air-launched rocket in November to fly the mission. "The versatility offered by Pegasus' unique air-launch capability provides customers with a space launch solution that can be rapidly deployed anywhere on Earth to reach any orbit," said Kurt Eberly, Director of Space Launch for Northrop Grumman. The mission is set to launch in June.


    Read more of this story at Slashdot.


  • The Trump Phone Either Is Or Isn't Closer To Delivery
    September 2025? January 2026? Delivery dates keep slipping for the Trump Organization's "Trump Phone" — a gold-coloured Android smartphone priced at $499 (£370). But in March the Verge spotted signs the phone was moving forward: FCC listings for a smartphone with the trade name "T1" show that it was tested late last year, and granted certification by the FCC in January... [T]he phone was submitted for testing by another company entirely: Smart Gadgets Global, LLC... Smart Gadgets Global's website promises "Top Quality Electronics created for 'YOUR' customer!" But in April the Trump phone revised its "Terms and Conditions" for preorders. The new language? A preorder deposit provides only a conditional opportunity if Trump Mobile later elects, in its sole discretion, to offer the Device for sale. A deposit is not a purchase, does not constitute acceptance of an order, does not create a contract for sale, does not transfer ownership or title interest, does not allocate or reserve specific inventory, and does not guarantee that a Device will be produced or made available for purchase.... Estimated ship dates, launch timelines, or anticipated production schedule are non-binding estimates only. Trump Mobile does not guarantee that: the Device will be commercially released... Trump Mobile will not be responsible for delay, modification, or failure to release a Device due to causes beyond its reasonable control, including but not limited to regulatory review, carrier certification delays, component shortages, labor disruptions, governmental orders, acts of God, transportation interruptions, or third-party supplier failures... If Trump Mobile cancels or discontinues the Device offering prior to sale, Trump Mobile will issue a full refund of the deposit amount paid... 
If Trump Mobile cancels, delays, or does not release the Device, your sole and exclusive remedy is a full refund of the deposit amount actually paid, and you waive any claim for equitable, injunctive, or specific performance relief relating to preorder priority or Device allocation. There was an unconfirmed report on social media that the updated Terms were also emailed to customers (cited by the International Business Times). And the new language also hedges that for the gold T1 phone, "Images, prototypes, beta demonstrations, and marketing renderings are illustrative only and may not reflect final production units...." But then eight days ago The Verge reported that the phone "has just passed another milestone on its slow road to release," described as "a requirement for any phone launching in the US..." "The phone has received the little-known PTCRB certification, a first step toward being certified to work on major networks and be issued with IMEI numbers." [A]t least, I think it's been certified. What's actually been certified by the PTCRB is the SGG-06, a smartphone from Smart Gadgets Global, LLC, with support for 5G, 4G, 3G, and 2G networks.


    Read more of this story at Slashdot.


  • Plant Seeds Do Something Incredible When the Sound of Rain Strikes
    "Plant seeds can sense the vibrations generated by falling raindrops," reports ScienceAlert, "and respond by waking from their state of dormancy to welcome the water, new research shows.... to germinate in 'anticipation' of the coming deluge."The finding, discovered by MIT mechanical engineers Nicholas Makris and Cadine Navarro, offers the first direct evidence that seeds and seedlings can sense and respond to sounds in nature... "The energy of the rain sound is enough to accelerate a seed's growth," [explains Markis]. Plants don't have the same aural equipment we do to actually hear sounds, of course. But the study suggests that seeds respond to the same vibrations that can produce a sound experience in our human ears. Across a series of experiments, the researchers submerged nearly 8,000 rice seeds in shallow tubs of water, at a depth of around 3 centimeters (1 inch), and exposed some of them to falling water drops over periods of six days... A hydrophone recorded the acoustic vibrations produced by the drops, confirming that the experiment mimicked the vibrations produced by actual raindrops falling in nature — such as the driving downpours that can sometimes pelt Massachusetts' puddles, ponds, and wetlands... In their study, the researchers observed that seeds exposed to the falling drops germinated up to around 37% faster, compared with seeds that did not receive the simulated rainstorm treatment but were housed in otherwise identical conditions. More information in Scientific American and Scientific Reports.


    Read more of this story at Slashdot.


  • Cisco Releases Open-Source 'DNA Test for AI Models'
    Cisco has released an open-source tool "to trace the origins of AI models," reports SC World, "and compare model similarities for greater visibility into the AI supply chain." [Cisco's Model Provenance Kit] is a Python toolkit and command-line interface (CLI) that looks at signals such as metadata and weights to create a "fingerprint" for AI models that can then be compared to other model fingerprints to determine potential shared origins. "Think of Model Provenance Kit as a DNA test for AI models," Cisco researchers wrote. "[...] Much like a DNA test reveals biological origins, the Model Provenance Kit examines both metadata and the actual learned parameters of a model (like a unique genome that comprises a model), to assess whether models share a common origin and identify signs of modification." The tool aims to address gaps in visibility into the AI model supply chain. For example, many organizations utilize open-source models from repositories like HuggingFace, where models could potentially be uploaded with incomplete or deceptive documentation. The Model Provenance Kit provides a way for organizations to verify claims about a model's origins, such as claims that a model is trained from scratch, when in reality it may be copied from another model, Cisco said. This may put organizations at risk of using models with unknown biases, vulnerabilities or manipulations and make it more difficult to resolve any incidents that arise from these risks. Thanks to Slashdot reader spatwei for sharing the news.
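Purely as an illustration of the weight-fingerprinting idea described above — this is my own simplification in Python, not the kit's real API (the function name and the plain SHA-256 over sorted parameter bytes are assumptions) — hashing the learned parameters can flag two models that share an origin despite carrying different metadata:

```python
import hashlib

def weight_fingerprint(weights: dict) -> str:
    """Hash a model's learned parameters (name -> raw bytes) into a
    comparable fingerprint. Identical weights => identical fingerprint."""
    h = hashlib.sha256()
    for name in sorted(weights):       # stable order so the hash is deterministic
        h.update(name.encode())
        h.update(weights[name])
    return h.hexdigest()

# Two "models" with different documentation but identical weights:
base = {"layer.0": b"\x01\x02\x03", "layer.1": b"\x04\x05"}
rehost = dict(base)  # re-uploaded under a new name, claiming "trained from scratch"
print(weight_fingerprint(base) == weight_fingerprint(rehost))  # True -> likely shared origin

# A genuinely different model produces a different fingerprint:
other = {"layer.0": b"\xff\x02\x03", "layer.1": b"\x04\x05"}
print(weight_fingerprint(base) == weight_fingerprint(other))   # False
```

The real toolkit reportedly compares metadata signals as well, so a matching weight hash is only one piece of the provenance picture.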


    Read more of this story at Slashdot.


  • Social Media Sites Got Information from Ad Trackers on US State Health Insurance Sites
    All 20 of America's state-run healthcare marketplace sites "include advertising trackers that share information with Big Tech companies," reports Gizmodo, citing a report from Bloomberg: Per the report, seven million Americans bought their health insurance through state exchanges in 2026, and many of them may have had personal information shared with companies, including Meta, TikTok, Snap, Google, Nextdoor, and LinkedIn, among others. Some of the data collected and shared with those companies included ZIP codes, a person's sex and citizenship status, and race. In addition to potentially sensitive biographical details about a person, the trackers also may reveal additional details about their life based on the sites they visit. For instance, Bloomberg found trackers on Medicaid-related web pages in Rhode Island, which could reveal information about a person's financial status and need for assistance. In Maryland, a Spanish-language page titled "Good News for Noncitizen Pregnant Marylanders" and a page designed to help DACA recipients navigate their healthcare options were found to be transmitting data to Big Tech firms... Per Bloomberg, several states have already removed some trackers from their exchange websites following the report. Thanks to Slashdot reader JoeyRox for sharing the news.


    Read more of this story at Slashdot.


  • 10 People Called Police to Report Bigfoot Sighting in Ohio
    CNN reports on a "sudden surge of claimed sightings" of "unidentified figures averaging 8 feet tall in wooded areas" along Ohio's Mahoning River. "And it stopped just as quickly as it started," says Jeremiah Byron, host of the Bigfoot Society Podcast, which collected and mapped the reports.... Byron doesn't take every report at face value, making sure he talks to people directly before publicizing their claims. Once word got out about the reports in Ohio, so did the obvious fakes. "I started to get a lot of AI-generated reports in my email. It got to the point where I was probably getting about 1,000 emails a day," he says. But when Byron spoke by phone with people who made the initial reports, they convinced him they weren't making anything up. "It was obvious they weren't just wanting to get their name out there," says Byron. "They were just freaked out by what they experienced, and they didn't want anything else to do with it." [...] Local law enforcement in Ohio also seem to be enjoying the publicity. Portage County Sheriff Bruce D. Zuchowski made a series of gag posts purporting to show the arrest of Bigfoot and his detention by Immigration and Customs Enforcement, only for the creature to escape from custody at the Canadian border... Despite the levity, the sheriff's office really did get some calls from concerned residents, Zuchowski says. "Ten individual people were like, 'Yeah I was walking my dog at 4 a.m. and I saw this hairy figure and I smelled this musty odor and there was this big thing and all of a sudden it ran,'" the sheriff told CNN affiliate WOIO in March.


    Read more of this story at Slashdot.


www.theregister.com - Articles

Polish Linux

  • Security: Why Linux Is Better Than Windows Or Mac OS
    Linux is a free and open source operating system, first released in 1991 by Linus Torvalds. Since its release it has built a user base that is widespread worldwide. Linux users swear by the reliability and freedom that this operating system offers, especially when compared to its counterparts, Windows and [0]


  • Essential Software That Are Not Available On Linux OS
    An operating system is essentially the most important component in a computer. It manages the different hardware and software components of a computer in the most effective way. There are different types of operating systems, and each comes with its own set of programs and software. You cannot expect a Linux program to have all [0]


  • Things You Never Knew About Your Operating System
    The advent of computers has brought about a revolution in our daily life. From computers so huge they could fill a room, we have come a very long way to desktops and even palmtops. These machines have become our virtual lockers, and a life without these networked machines has become unimaginable. Sending mails, [0]


  • How To Fully Optimize Your Operating System
    Computers and systems are tricky and complicated. If you lack thorough, or even basic, knowledge of computers, you will often find yourself in a bind. You must understand that something as complicated as a computer requires constant care and constant cleaning up of junk files. Unless you put in the time to configure [0]


  • The Top Problems With Major Operating Systems
    No system exists that does not give you any problems. Even if your system and its operating system are easy to understand, there will be times when certain problems arise. Most of these problems are easy to handle and easy to get rid of. But you must be [0]


  • 8 Benefits Of Linux OS
    Linux is a small and fast-growing operating system. However, we can't term it as software yet. As discussed in the article about what a Linux OS can do, Linux is a kernel. Now, kernels are used for software and programs. These kernels are used by the computer and can be used with various third-party software [0]


  • Things Linux OS Can Do That Other OS Cant
    What Is Linux OS?  Linux, similar to Unix, is an operating system which can be used for various computers, handheld devices, embedded devices, etc. The reason why Linux-based systems are preferred by many is that they are easy to use and re-use. A Linux-based operating system is technically not an Operating System. Operating [0]


  • Packagekit Interview
    PackageKit aims to simplify the management of applications on Linux and GNU systems. The main objective is to remove the pains it takes to create a system. Along with this, in an interview, Richard Hughes, the developer of PackageKit, said that he aims to make Linux systems just as powerful as Windows or [0]


  • What’s New in Ubuntu?
    What Is Ubuntu? Ubuntu is open source software. It is useful for Linux based computers. The software is marketed by Canonical Ltd. and the Ubuntu community. Ubuntu was first released in late October 2004. The Ubuntu program uses Java, Python, C, C++ and C# programming languages. What Is New? The version 17.04 is now available here [0]


  • Ext3 Reiserfs Xfs In Windows With Regards To Colinux
    The problem with Windows is that there are various limitations to the computer and there is only so much you can do with it. You can access Ext3, ReiserFS and XFS by using the coLinux tool. Download the tool from the official site or from the SourceForge site. Edit the connection to “TAP Win32 Adapter [0]


OSnews

  • Google is tying reCAPTCHA to Google Play Services, screwing over de-Googled Android users
    The ways in which Google can lock you into their ecosystem are often obvious, but sometimes, they're incredibly sneaky and easily missed. CAPTCHA tests are annoying, but at the same time, they can help protect websites from bots. While these tests are already the bane of our internet existence, they are going to get worse for some Android users. A requirement for Google’s next-generation reCAPTCHA system will make it a lot harder for de-Googled phones to browse the web. A Reddit user has highlighted a seemingly innocuous support page for Google’s reCAPTCHA system. The page in question relates to troubleshooting reCAPTCHA verification on mobile. In the document, it says that you’ll need to use a compatible mobile device to complete verification. If you have an Android phone, then that means you’ll need to be running Google Play Services version 25.41.30 or higher. ↫ Ryan McNeal at Android Authority When was the last time you actively thought about reCAPTCHA being a Google property? Even then, when was the last time you imagined something as annoying but ultimately basic as a captcha prompt could be used to tie people to Google Play Services, and thus to "blessed" Android? Every time we manage to work around one of these asinine ties to Google Play Services, another one pops up to ruin our day. We're so stupidly tied down to and entirely dependent on two very mid, at best, mobile operating systems, and it's such a stupid own goal for especially everyone outside of the US to just sit there and do nothing about it. Worse yet, it seems we're only tying ourselves down further, while paying for the privilege. At the very least we should be categorising certain services (government ID services, payment services, popular messaging platforms, and a few more) as vital infrastructure, and legally mandate these services have clearly defined and well-documented APIs so anyone is free to make alternative clients. The fact that many people are tied to either iOS or "blessed" Android because of something as stupid as what bank they use or the level of incompetency of their government ID service should be a major crisis in any country that isn't the US. I don't want to use iOS or Android, but nobody is leaving me any choice. It's infuriating.


  • Why don’t lowercase letters come right after uppercase letters in ASCII?
    With that context, I always found it strange that the designers of ASCII included 6 characters after uppercase Z before starting the lowercase letters. Then it hit me: we have 26 letters in the English alphabet, plus 6 additional characters before lowercase starts: 26 + 6 = 32. If you know anything about computers, powers of 2 tend to stick out. Let’s take a look at the binary representations of some characters compared to their lowercase counterparts. ↫ Tyler Hillery I only have a middling understanding of the rest of the article and thus the ultimate reason why ASCII includes those six characters between Z and a, but I think it comes down to making certain operations on uppercase and lowercase letters specifically more elegant. In some deep crevices of my brain all of this makes sense, but I find it very difficult to truly understand and explain as someone who knows little about programming.
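The bit pattern the quoted passage hints at is easy to see directly. Because the gap between the uppercase and lowercase ranges is exactly 32 (bit 5), case conversion on ASCII letters reduces to a single bit operation, as this short Python check shows:

```python
# 'A' is 0b1000001 (65) and 'a' is 0b1100001 (97): the only difference is bit 5.
assert ord("a") - ord("A") == 32

# Setting bit 5 lowercases, clearing it uppercases, and XOR toggles case:
assert chr(ord("Q") | 0x20) == "q"   # set bit 5
assert chr(ord("q") & ~0x20) == "Q"  # clear bit 5
assert chr(ord("Z") ^ 0x20) == "z"   # toggle works in either direction

for ch in "ABC":
    print(f"{ch} = {ord(ch):07b}, {ch.lower()} = {ord(ch.lower()):07b}")
```

This is presumably the elegance the six-character gap buys: uppercase and lowercase letters differ in exactly one bit, so hardware and software can convert case with a single AND, OR, or XOR.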


  • Detecting (or not) the use of -l and -c together in Bourne shells
    Many Bourne shells go slightly beyond the POSIX sh specification to also support a -l option that makes the shell act as a login shell. POSIX's omission of -l isn't only because it doesn't really talk about login shells at all; it's also because Unix has a special way of marking login shells that goes back very far in its history. The -l option isn't necessarily what login and sshd and so on use; it's something that you can use if you specifically want to get a login shell in an unusual circumstance. Bourne shells also have a -c "command string" option that causes the shell to execute the command string rather than be interactive (this is a long standing option that is in POSIX). It may surprise you to hear that most or all Bourne shells that support -l also allow you to use -l and -c together. Basically all Bourne shells interpret this as first executing your .profile and so on, then executing the command string instead of going interactive. One use for this is to non-interactively run a command line in the context of your fully set up shell, with $PATH and other environment variables ready for use. ↫ Chris Siebenmann Now, what if you want to detect the use of these two options combined, for instance to make it so certain parts of your .profile are ignored? It turns out very few Bourne shells actually support this, and that's what Siebenmann's latest post is about.


  • Fedora Project Leader says he doesn't care about the reputational damage from Fedora embracing "AI"
    On the Fedora forums, there's a long-running thread about a proposal for Fedora to build a variant of the distribution aimed specifically at "AI". The "problem" identified in the proposal is that setting up the various parts that a developer in the "AI" space needs is currently quite difficult on Fedora, and as such, a bunch of technical steps need to be taken to make this easier. Setting aside the "AI" of the proposal and ensuing discussion, it's actually a very interesting read, going deep into the weeds about consequential questions like building an LTS kernel on Fedora, support for out-of-tree kernel mods, and a lot more. To spoil the ending: the proposal has already been approved unanimously by the Fedora Council, meaning the efforts laid out in the proposal will be undertaken. This means that, depending on progress, we'll see a Fedora "AI" Desktop or whatever it's going to be called somewhere in the timeframe from Fedora 45 to Fedora 47. As a Fedora user on all my machines, I'm obviously not too happy about this, since I'd much rather the scarce resources of a project like Fedora go towards things not as ethically bankrupt, environmentally destructive, and artistically deficient as "AI", but in the end it's a project owned and controlled by IBM, so it's not exactly unexpected. What really surprised me in this entire discussion is a post by Fedora Project Leader Jef Spaleta, responding to worries people in the thread were having about such a big "AI" undertaking under the Fedora branding causing serious reputational damage to Fedora as a whole. These concerns are clearly valid, as people really fucking hate "AI", doubly so in the open source community, whose work "AI" coding tools especially are built on without any form of consent. As such, Fedora undertaking a big "AI" desktop project is bound to have a negative impact on Fedora's image. Just look at what aggressively pushing Copilot has done to Windows 11's already shit reputation. Spaleta, however, just doesn't care.
Literally. As the Fedora Project Leader, I am absolutely not concerned about the reputational damage to this project that comes with setting up an entirely new output attractive to developers who want to make use of Ai tools. ↫ Jef Spaleta I've been looking at this line on and off for a few days now, and I just can't wrap my head around how the leader of an open source project built on and relying on the free labour of thousands of contributors says he doesn't care about reputational damage to the project he's leading. Effective and capable open source contributors are not exactly a commodity, and a lot of the decisions they make about what projects to donate their time to are based on vibes and personal convictions; you can't really pay them to look the other way. Saying you don't care about reputational damage to your huge open source project seems rather shortsighted, but of course, I don't lead a huge open source project, so what do I know? In the linked thread alone, one long-time Fedora contributor, Fernando Mancera, already decided to leave the project on the spot, and I have a sneaking suspicion he won't be the last. "AI" is a deeply tainted hype on many levels, and the more you try to chase this dragon, the more capable people you'll end up chasing away.


  • Redox gets partial window pixel updating, tmux, and more
    Another month, another progress report, Redox, etc. etc., you know the drill by now. This past month Redox saw improved booting on real hardware by making sure the boot process continues even if certain drivers fail or become blocked. Thanks to some changes on the RISC-V side, running Redox on real RISC-V hardware has also improved. Furthermore, tmux has been ported to Redox, CPU time reporting has been improved, and Orbital, Redox's desktop environment, gained support for partial window pixel updating, which should increase UI performance. On top of that, there's a brand new web user interface to browse Redox packages (x86-64, i586, ARM64 (aarch64), and RISC-V (riscv64gc)), as well as the usual list of improvements to the kernel, drivers, relibc, and many more areas of the operating system.


  • Setting up a Sun Ray server on OpenIndiana Hipster 2025.10
    Time for another Sun Ray blog post! I've had a few people email me asking for help setting up a Sun Ray server over the last few months, and despite my attempts to help them get it going there's been mixed results with running SRSS on OpenIndiana Hipster 2025.10. My Sun Ray server is still on an earlier OI snapshot, so I figured it was about time to try to actually follow the new guides myself. ↫ The Iris System Ever since my spiraling down the Sun rabbit hole late last year, I've tried a few times now to get the x86 version of OpenIndiana and Oracle Solaris working on any of my machines, exactly for the purposes of setting up a modern Sun Ray server. Sadly, none of my machines are compatible with any illumos distribution or Oracle Solaris, so I've been shit out of luck trying to get this side project off the ground. My Ultra 45 is sadly also not supported by any SPARC version of illumos or Oracle Solaris, so unless I buy even more hardware, my dream of a modern Sun Ray setup will have to wait. Of course, virtualisation is an option for many, and that's exactly what this particular guide is about: setting up OpenIndiana on a Proxmox virtual machine. I actually have a Proxmox machine up and running and could do this too, but I'm a sucker for running stuff like this on real hardware. Yes, that makes my life more complicated and difficult, and no, it's not more noble or real or hardcore; it's just a preference. Still, for normal people who pick up a Sun Ray or two on eBay for basically nothing, running OpenIndiana in a virtual machine is the smart, reasonable, and effective option.


  • My favorite device is a Chromebook, without ChromeOS!
    If you're sick of Chrome OS on your Chromebook, or can find a Chromebook for cheap somewhere but don't actually want to use Chrome OS, have you considered postmarketOS? Since I was kind of frustrated with ChromeOS, I decided to take a look at something that I knew supported my Lenovo Duet 3 for some time: postmarketOS. For those who don't know, postmarketOS is an Alpine Linux-based distro focused on replacing the original OS from old phones (generally running Android) with a "true" Linux distro. They also seem to support some Chromebooks because of their unique architecture and, luckily, they support my device under the google-trogdor platform. ↫ kokada PostmarketOS is aimed at smartphones primarily, but supports other form factors just fine as well. The Duet 3 is one of the tablet-like devices it supports, and it seems most things are working quite well. In fact, judging by the postmarketOS wiki, quite a few Chromebooks have good support, and with Chromebooks being cheap and a dime a dozen on eBay and similar auction sites, it seems like a great way to get started with what is trying to become a true Linux for smartphones.


  • The text mode lie: why modern TUIs are a nightmare for accessibility
    There is a persistent misconception among sighted developers: if an application runs in a terminal, it is inherently accessible. The logic assumes that because there are no graphics, no complex DOM, and no WebGL canvases, the content is just raw ASCII text that a screen reader can easily parse. The reality is different. Most modern Text User Interfaces (TUIs) are often more hostile to accessibility than poorly coded graphical interfaces. The very tools designed to improve the Developer Experience (DX) in the terminal—frameworks like Ink (JS/React), Bubble Tea (Go), or tcell—are actively destroying the experience for blind users. ↫ Casey Reeves The core reason should be obvious: the command-line interface, at its core, is just a stream of data with the newest data at the bottom, linearly going back in time as you go up. Any screen reader can deal with this fairly easily, and while I personally have no need for such a tool, I've heard from those that do that kernel-level screen readers are quite good at what they do. TUIs, or text-based user interfaces, made with modern frameworks are actually very different: they're a 2D grid of "pixels", where every character cell is a pixel, abandoning the temporal flow for a spatial layout. It should become immediately obvious that screen readers won't really know what to do with this, and Reeves gives countless examples, but the short version is this: the cursor jumps all over the place with every screen update, which makes screen readers go nuts. Various older TUIs, made in a time well before these modern TUI frameworks came about, were designed in a much more terminal-friendly way, or give you options to hide the cursor to solve the problem that way. Irssi, for example, uses VT100 scrolling regions instead of redrawing the whole screen every time something changes. I had never really stopped to think about TUIs and screen readers, as is common among us sighted people. 
The problems Reeves describes seem to stem not so much from TUIs being inherently inaccessible, but from modern frameworks not actually making use of the terminal's core feature set. I really hope this article by Reeves shines a light on this problem, and that the people developing these modern TUIs start taking accessibility more seriously.


  • Using duplicity to back up your FreeBSD desktop
    Backing up in modern times, we’ve had ZFS snapshots and replication to make this task extremely easy. However, you may not have access to another ZFS endpoint for replication, need to diversify risk by using a non-ZFS tool for backup, or are simply using UFS2, living the old skool life. For these situations, my first recommendation is to lean on Tarsnap for its ease of use and simplicity, making restoration just as easy as backing up. But some situations call for a different approach. Maybe you have a strict firewall at your company that doesn’t allow Tarsnap data streams to egress from your corporate network, or you have internal/easy access to storage endpoints, such as S3-compatible object storage or a large-file storage location with SFTP access. When you are faced with the latter, the duplicity (sysutils/duplicity in ports) utility is available as an easily installable package onto your FreeBSD system. ↫ Jason Tubnor at the FreeBSD Foundation The rest of the article explains how to use duplicity on FreeBSD for the purpose described above.


  • Testing MacOS on the Apple Network Server 2.0 ROMs
    Earlier this year, Mac OS and Windows NT-capable ROMs were discovered for Apple’s unique AIX Network Server. Cameron Kaiser has since spent more time digging into just how capable these ROMs are, and has published another one of his detailed stories about his efforts. Well, thanks to Jeff Walther who generously built a few replica ROM SIMMs for me to test, we can now try the "2.0" MacOS ROMs on holmstock, our hard-working Apple Network Server 700 test rig (stockholm, my original ANS 500, is still officially a production unit). And there are some interesting things to report, especially when we pit the preproduction ROMs and this set head-to-head in MacBench, and even try booting Rhapsody on it. ↫ Cameron Kaiser A great read, as always.


  • Windows gets a new Run dialog
    With Windows being as old and long-running as it is, theres a ton of old and outdated bits and pieces lurking in every nook and cranny. I have always found these old relics fascinating, especially now that over the past few years, Microsoft has attempted to replace some of those bits and pieces with modern replacements (not always to great success, but thats another story). One of those parts of the UI thats been virtually unchanged since the release of Windows 95 is the Run dialog, but thats about to change: Microsoft has released a completely new Run dialog to early testers. Windows Run, also known as the Run dialog, is a surface that has been around for over 30 years. It has become a heavily relied upon tool for developers and advanced users alike. Users have decades of muscle memory where they hit Win+R, navigate through their Run history, and hit Enter to quickly access various paths and tools. We all have our favorite tool we launch there as well. For us, some of our favorites are wt (Windows Terminal), mstsc (Remote Desktop) and winword (Microsoft Word). But it’s more than jUsT a TeXt BoX tHaT rUnS tHiNgS. The Run dialog can handle navigating both local and network file paths as well. And everything it does, it does fast. Win+R opens the run dialog seemingly instantly. If we wanted to modernize the Run Dialog to fit the modern Windows 11 design style, we had to make sure it did everything just as well as before. We needed to maintain the same performance while also keeping the user interface minimal, just as Windows 95 intended. ↫ Clint Rutkas at the Microsoft Dev Blogs The new Run dialog looks like it belongs in Windows 11, which is a nice improvement, but the most important part is that they actually seem to have made it a little faster. 
    Sure, they may have only shaved off a few milliseconds from its opening time, but considering virtually everything else they've touched in Windows over the years got considerably slower, that's a good showing for Microsoft. The new feature they've added is that by typing ~\, you can open your home directory. The one casualty is the browse button, which, according to Microsoft's data, literally nobody ever used. I know it's just a small thing and in the end not even a remotely consequential one, but with an operating system as old and storied as Windows, replacing these ancient parts that millions of people rely on every day absolutely fascinates me. There must be a considerable amount of pressure on the people developing something like this new Run dialog, especially with Windows' reputation being at one of its lowest points, so it's good to see them being able to deliver. The new Run dialog is available today for testers, and if you're on the Windows Insider Experimental Channel, you can enable it in Settings > System > Advanced. Coincidentally, on my Windows 11 machine that I use for just one stupid video game, this Advanced page displays a loading spinner for five minutes and then just dies. Also, Notepad won't start (one time it showed this dialog), and using the terminal to load it causes the old Win32 version of Notepad to open after 5 minutes of waiting, which then hangs and crashes. People pay money for this.


  • GNOME is good, actually
    While I'm normally a KDE user, I do keep close tabs on various other desktop environments, and install and set them up every now and then to see how they're faring, what improvements they've made, and ultimately, whether my preference for KDE is still warranted. This usually means setting up a nice OpenBSD installation for Xfce, Fedora for GNOME, and less often others for some of the more niche desktop environments. Since GNOME 50 was just released, guess whose turn it is now? Since everybody's already made up their mind about their preferred desktop eons ago, with upsides and downsides debated far past their expiration date, I'm not particularly interested in reviewing desktop environments or Linux distributions. However, after asking around on Fedi, it seemed there was quite a bit of interest in an article detailing how I set up GNOME, what changes I make to the defaults, which extensions I use, what tweaks I apply, and so on. Of course, everything described in this article is highly personal, and I'm not arguing that this is the optimal way to tweak GNOME, that the extensions I use are the best ones, or that any visual modifications I make are better than whatever defaults GNOME uses. No, my goal with this article is twofold: one, to highlight that GNOME is a lot more configurable, extensible, and malleable than common wisdom on the internet would have you believe. It's not KDE or one of those cobbled-together tiling Wayland desktops, but it's definitely not as rigid as you might think. And two, that GNOME is good, actually.
    Tools of the trade
    The first thing I do is install a few crucial tools that make it easier to modify and tweak GNOME. I really dislike lists in articles, but I will begrudgingly use one here: After installing all of these tools, the actual tweaking can commence.
    Visual tweaks
    I didn't use to like GNOME's Adwaita visual style, but over the years, it started growing on me to the point where I don't actively dislike it anymore. 
    With the arrival of libadwaita, it has also become effectively impossible to theme modern GNOME applications, so even if you do change to something else, many of your applications won't follow along. If consistency is something you care about, you'll stick to Adwaita, but that leaves one problem unresolved: applications that still use GTK3. These applications will follow a much older version of Adwaita, making them stand out like eyesores among all the modern GTK4 stuff. Luckily, since GTK3 applications are still properly themeable, this is easily fixed: just install the adw-gtk3 theme, either by hand, or through your distribution's repositories. To enable it, first install the user themes extension through Extension Manager, and then enable the theme for Legacy Applications in GNOME Tweaks. Any GTK3 applications you still use will now integrate nicely with modern libadwaita applications. The one part of GNOME I really do deeply dislike is its icon theme. I can't quite explain why I dislike this icon set so much, but it runs deep, so one of the very first things I do is replace the default GNOME icon set with my personal favourite, Qogir. This is a popular icon set, so it's usually available in your distribution's repositories, but I always install it from its GitHub page. Changing GNOME's icon set is as simple as selecting it in GNOME Tweaks. You can't get much more personal taste than an icon set, and there are dozens of amazing sets to choose from in the Linux world. Changing them out and trying out new ones is stupidly easy, and it's definitely worth looking at a few that might be more pleasing to you than GNOME's (or KDE's) default. Lastly, I open Add Water and enable the amazing GNOME theme for LibreWolf. Add Water basically makes this as easy as flipping a switch, so there's no need to copy any files into your LibreWolf profile or whatever. 
    The application also provides a few more small tweaks to fiddle with, like enabling standard tab widths so tabs don't grow and shrink as you close and open tabs, moving the bookmarks bar below the tab bar, and many more.
    Extensions
    Since the release of GNOME 3 in 2011, extensions have been the most capable way to modify GNOME's look, behaviour, and feature set. As far as I can tell, while the extension framework is an official part of the GNOME Shell, the extensions themselves are all third-party and not part of a vanilla GNOME installation. By now, there are over 2800 listed extensions, but that number includes abandoned extensions, so it's hard to determine the actual number of currently-maintained ones. Whatever the actual number is, there's bound to be things in there you're going to want to use. Here are the extensions I have installed. Let's just start at the top and work our way down. I guess I'm forced to do another list. There are countless more extensions to choose from, and you're definitely going to find things you never even thought could be useful.
    Miscellaneous tweaks
    There are a few other things I modify. In GNOME Tweaks, I make it so that double-clicking a window's titlebar minimises it while right-clicking it lowers it; two features I picked up during my years as a BeOS user that I absolutely refuse to give up. I configure the dock from Dash to Dock so that it always remains on top and never hides itself, no matter the circumstances. In Settings, I disable virtual desktops entirely (I don't like virtual desktops), and I make sure tap-to-click is disabled (if I'm on a laptop).
    GNOME is good, actually
    After making all of these changes, I feel quite comfortable using GNOME, at least on my laptop. It's a nice, coherent experience, and offers what is probably the most polished graphical user interface you can find on Linux, even if it isn't the most full-featured. The third-party application ecosystem, through modern
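    The theme and icon switches described above can also be scripted rather than clicked through in GNOME Tweaks; here is a minimal sketch of the equivalent gsettings calls, driven from Python. It assumes adw-gtk3 and Qogir are already installed, and only prints the commands by default so it can be inspected safely.

```python
import subprocess

def gsettings_set(schema: str, key: str, value: str) -> list[str]:
    """Build a gsettings command; running it is left commented out so the
    sketch is safe to inspect outside a live GNOME session."""
    argv = ["gsettings", "set", schema, key, value]
    # subprocess.run(argv, check=True)  # uncomment on a live GNOME session
    return argv

# Legacy (GTK3) theme and icon theme, matching the Tweaks settings in the text:
print(gsettings_set("org.gnome.desktop.interface", "gtk-theme", "adw-gtk3"))
print(gsettings_set("org.gnome.desktop.interface", "icon-theme", "Qogir"))
```

    The same two keys are what GNOME Tweaks itself writes to, so scripting them is handy when you rebuild an installation from scratch.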


  • How fast is a macOS VM, and how small could it be?
    To assess how small a macOS VM could be, I ran the same VM of macOS 26.4.1 on progressively smaller CPU core and memory allocations, using my virtualiser Viable. The VM’s display window was set to a standard 1600 x 1000, and I ran Safari through its paces and performed some lightweight everyday tasks, including Storage analysis in Settings. Starting with 4 virtual cores and 8 GB vRAM, where the VM ran perfectly briskly with around 5 GB of memory used, I stepped down to 3 cores and 6 GB, to discover that memory usage fell to 3.9 GB and everything worked well. With just 2 cores and 4 GB of memory only 3.1 GB of that was used, and the VM continued to handle those lightweight tasks normally. ↫ Howard Oakley This is good news for people interested in the MacBook Neo who may also want to run a macOS virtual machine on it.


  • Email is crazy
    Email is like those creaking old Terminators from the ’70s which continue to function without complaining. Designed for a world that doesn’t exist anymore, it has optional encryption, no built-in auth, three⁺ retrofitted security layers bolted on top, an unstandardized filtering layer and many more quirks. Yet billions of emails arrive correctly every single day. Email is not elegant but nonetheless it is Lindy. In the new age of agentic AI, we can only expect it to metamorphose into another dimension. ↫ Saurabh Sam! Khawase It's bad enough that email is as complicated as it is, but having it be so dominantly controlled by only a few large gatekeepers like Google and Microsoft surely isn't helping either. I feel like email is no longer really a technology individuals can actively partake in at every level; it feels much more like WhatsApp or iMessage or whatever, in that we just get to send messages, and that's it. Running your own mail server isn't only a complex endeavour, it's also a continuous cat-and-mouse game with companies like Google and Microsoft to ensure you don't end up on some shitlist and your emails stop arriving. I settled on Fastmail as my email service, and it works quite well. Still, I would love to be able to just run my own email server, or have some of my far more capable friends run one for a small group of us, but it's such a daunting and unpleasant effort that few people seem to have the stomach and perseverance for it.
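    The "individuals can partake at every level" point is still literally true at the protocol level: composing a standards-compliant message takes a few lines of Python's standard library. A minimal sketch (the addresses and the commented-out relay hostname are placeholders, not real infrastructure):

```python
from email.message import EmailMessage

def build_message(sender: str, recipient: str, subject: str, body: str) -> EmailMessage:
    """Compose a simple RFC 5322 message using only the stdlib."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

msg = build_message("me@example.org", "you@example.org", "Groceries", "Milk, eggs, coffee.\n")
print(msg["Subject"])  # Groceries
# Actually delivering it is where the cat-and-mouse game begins; via a relay:
#   import smtplib
#   with smtplib.SMTP("mail.example.org", 587) as s:
#       s.starttls(); s.login(user, password); s.send_message(msg)
```

    The hard part, as the article says, is not this code; it is keeping the sending host off the gatekeepers' blocklists.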


  • The day I logged 1 in every 2000 public IPv4: visualizing the AI scraper DDoS
    What if you run a few online services for you and your friends, like a small git instance and a grocery list service, but you get absolutely hammered by AI! scrapers? I cannot impress upon you, reader, that this is not only an attack that is coordinated, it is an attack that is distributed. I run a small set of services, basically only for me and my friends. I am not a hyperscaler, I am not a tech company, I am not even a small platform. I have a git forge where I put the shit I make, and a couple other services where me and my friends backup our files or write our grocery lists. I am not fucking Meta and I cannot scale the fuck up just because OpenAI or Anthropic or Meta or whoever is training a model that week wants to suck all the content out of my VPS ONCE MORE until it’s dry. ↫ lux at VulpineCitrus So how much traffic did the author of this piece, lux, get from AI! scraping bots? Within a time period of 24 hours, they were hammered by 2,040,670 unique IP addresses, 98% of which were IPv4 addresses, which means that 1 out of every 2000 publicly available IPv4 addresses was involved in the scraping. Together, they performed over 5 million requests. And just to reiterate: they were scraping a few very small, friends-only services run by some random person. This is absolutely insane. If, at this point in time, with everything that we know about just how deeply unethical every single aspect of AI! is, you're still using and promoting it, what is wrong with you? If you're so addicted to your AI! girlfriend's unending stream of useless, forgettable sycophantic slop, despite being aware of the damage you're doing to those around you, there's something seriously wrong with you, and you desperately need professional help. You don't need any of this. The world doesn't need any of this. Nobody likes the slop AI! regurgitates, and nobody likes you for enabling it. Get help.
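    Numbers like lux's are straightforward to reproduce from a web server's access log. A hedged sketch (it assumes a combined-format log where the client IP is the first field; the sample lines are hand-written, not lux's data), counting unique IPv4 clients and the fraction of the 2³² address space they represent:

```python
import ipaddress

def unique_ipv4_share(lines):
    """Count unique IPv4 client addresses (first whitespace-separated field
    of each log line) and their share of the full 2**32 IPv4 space."""
    ips = set()
    for line in lines:
        if not line.strip():
            continue
        addr = line.split(None, 1)[0]
        try:
            if isinstance(ipaddress.ip_address(addr), ipaddress.IPv4Address):
                ips.add(addr)
        except ValueError:
            continue  # not an IP address; skip malformed lines
    return len(ips), len(ips) / 2**32

sample = [
    '203.0.113.7 - - [01/Jan/2026] "GET /repo HTTP/1.1" 200 512',
    '203.0.113.7 - - [01/Jan/2026] "GET /repo HTTP/1.1" 200 512',
    '198.51.100.23 - - [01/Jan/2026] "GET /list HTTP/1.1" 200 128',
]
count, share = unique_ipv4_share(sample)
print(count)  # 2
```

    Fed lux's real 24-hour log, the same counter would report roughly two million unique addresses; dividing by the routable portion of the IPv4 space rather than the full 2³² gives the "1 in 2000" figure.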


  • Earliest 86-DOS and PC-DOS code released as open source
    Microsoft is continuing its efforts to release early versions of DOS as open source, and today we've got a special one. We’re stoked today to showcase some newly available source code materials that provide an even earlier look into the development of PC-DOS 1.00, the first release of DOS for the IBM PC. A dedicated team of historians and preservationists led by Yufeng Gao and Rich Cini has worked to locate, scan, and transcribe the stack of DOS-era source listings from Tim Paterson, the author of DOS. The listings include sources to the 86-DOS 1.00 kernel, several development snapshots of the PC-DOS 1.00 kernel, and some well-known utilities such as CHKDSK. Not only were these assembler listings, but there were also listings of the assembler itself! This work offers rare insight into how MS-DOS/PC-DOS came to be, and how operating system development was done at the time, not as it was later reconstructed. ↫ Stacey Haffner and Scott Hanselman It's wild that the source code had to be transcribed from paper, including notes and changes. You can find more information about the process on Gao’s website and Cini’s website.


Linux Journal - The Original Magazine of the Linux Community

  • Linux 7.1-rc2 Released with Driver Fixes, Steam Deck OLED Audio Repair, and Growing AI Patch Trends
    by George Whittaker
    Linus Torvalds has officially released Linux kernel 7.1-rc2, the second release candidate in the Linux 7.1 development cycle. While Torvalds described the update as a “fairly normal” RC release, the kernel includes a broad collection of driver fixes, subsystem cleanups, and stability improvements that continue shaping the next major Linux kernel release.

    Although still an early testing version intended mainly for developers and enthusiasts, Linux 7.1-rc2 already delivers several notable fixes—especially for graphics hardware, networking, and gaming devices like the Steam Deck OLED.
    A Strange-Looking Release—But for a Good Reason
    One of the first things Torvalds mentioned in the release announcement was the unusually large patch statistics. At first glance, the release appears much larger than expected, but there’s an explanation behind the inflated numbers.

    Much of the activity comes from a large cleanup effort in the KVM selftests subsystem, where developers renamed variables and types to better match Linux kernel coding conventions. Because thousands of lines were renamed rather than fundamentally rewritten, the patch count looks dramatic even though the underlying functional changes are relatively modest.

    Torvalds specifically advised testers not to overreact to the “big and strange” diff statistics.
    Graphics and Driver Fixes Take Center Stage
    As is common during early release candidates, a large portion of the work in Linux 7.1-rc2 focuses on hardware drivers. GPU and networking drivers account for a significant share of the meaningful fixes in this release.

    Notable improvements include:
    • Additional fixes for AMD GPU support
    • Intel Xe graphics driver adjustments and tuning
    • Networking stability improvements
    • Filesystem fixes, including NTFS driver updates
    • Memory leak patches and race-condition corrections
    These kinds of updates are critical during the RC phase because they help stabilize hardware compatibility before the final release reaches mainstream distributions.
    Steam Deck OLED Audio Finally Gets Fixed
    One of the more interesting fixes in Linux 7.1-rc2 addresses a long-standing issue affecting the Steam Deck OLED. According to reports, audio support for Valve’s handheld had been broken in the mainline Linux kernel for nearly two years, forcing Valve and some handheld-focused distributions to carry their own downstream patches and workarounds.

    With Linux 7.1-rc2, an upstream fix for the audio issue has finally landed, potentially simplifying support for Linux gaming handhelds moving forward.

    For Linux gamers and portable gaming enthusiasts, this is one of the more practical improvements included in the release candidate.
    Go to Full Article


  • LibreOffice 26.4 Beta Experiments with AI Writing Features and Smarter Editing Tools
    by George Whittaker
    The upcoming LibreOffice 26.4 Beta is introducing early AI-powered writing capabilities, signaling a new direction for the open-source office suite. While LibreOffice has traditionally focused on privacy, local processing, and open standards, the beta release shows that The Document Foundation is now exploring how artificial intelligence can assist users without fully embracing cloud-dependent ecosystems.

    The result is a cautious but notable step toward AI-enhanced productivity on Linux and other desktop platforms.
    AI Writing Assistance Comes to LibreOffice
    One of the biggest additions connected to LibreOffice 26.4 Beta is expanded support for AI-assisted writing tools through integrations such as WritingTool, an open-source LibreOffice extension designed to enhance editing workflows.

    These AI features focus on practical writing assistance rather than aggressive automation. Current capabilities include:
    • Grammar and style suggestions
    • Paragraph rewriting and refinement
    • Text expansion and summarization
    • Translation assistance
    • AI-assisted content generation
    Unlike many proprietary AI platforms, these tools can operate using local AI models, allowing users to avoid sending documents to external cloud services.
    A Privacy-Focused Approach to AI
    LibreOffice’s AI direction differs from the strategies used by many commercial office suites. Instead of tightly integrating mandatory cloud AI services, the project appears focused on:
    • Optional AI functionality
    • User-controlled integrations
    • Support for local inference servers
    • Compatibility with self-hosted AI solutions
    The WritingTool project specifically highlights support for local AI backends and OpenAI-compatible APIs, including self-hosted tools like LocalAI.

    This approach aligns closely with the values of many Linux and open-source users who prioritize privacy and transparency.
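    Concretely, an "OpenAI-compatible API" means the backend exposes a /v1/chat/completions endpoint that accepts a JSON chat payload. A sketch of the request such an integration might send to a local server; the base URL, port, and model name are illustrative assumptions, not WritingTool's actual configuration:

```python
import json
import urllib.request

def rewrite_request(text: str, instruction: str,
                    base_url: str = "http://localhost:8080",  # assumed local server address
                    model: str = "local-model") -> urllib.request.Request:
    """Build a chat-completions request against an OpenAI-compatible server."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = rewrite_request("Their going to the store.", "Fix the grammar; change nothing else.")
print(req.full_url)
# Sending it (urllib.request.urlopen(req)) only works with a server listening locally,
# which is exactly the point: the document text never leaves the machine.
```

    Because the wire format is the same whether the endpoint is LocalAI on localhost or a hosted service, "local by default" becomes a configuration choice rather than a code change.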
    What AI Tools Can Actually Do
    The AI writing features currently being tested are aimed at improving productivity rather than replacing human writing entirely.

    Examples include:
    Grammar and Style Improvements
    AI can analyze text for readability, awkward phrasing, and stylistic consistency.
    Paragraph Rewriting
    Users can ask the assistant to:
    • Simplify text
    • Make writing more formal or casual
    • Expand short sections
    • Rephrase unclear sentences
    Content Assistance
    The tools can also help generate outlines, draft paragraphs, or suggest alternative wording for documents.
    Go to Full Article


  • Linux Foundation Launches Open Driver Initiative to Strengthen Hardware Support Across Linux
    by George Whittaker
    The Linux Foundation has announced a new Open Driver Initiative, a collaborative effort aimed at improving the development, maintenance, and long-term sustainability of open-source hardware drivers across the Linux ecosystem.

    The initiative reflects growing demand for better hardware compatibility in areas ranging from desktops and gaming systems to cloud infrastructure, automotive platforms, AI hardware, and next-generation networking. As Linux expands into more industries and devices, driver quality and openness have become increasingly important.
    Why Open Drivers Matter
    Hardware drivers are the bridge between the operating system and physical components such as:
    • Graphics cards
    • Wi-Fi adapters
    • Storage controllers
    • Network devices
    • Embedded and automotive systems
    When drivers are open source, developers can:
    • Improve compatibility more quickly
    • Audit code for security issues
    • Maintain support for older hardware longer
    • Integrate drivers more cleanly into the Linux kernel
    Open drivers also reduce dependence on proprietary vendor software, which can become outdated or unsupported over time.
    What the Open Driver Initiative Aims to Do
    According to early details surrounding the Linux Foundation’s broader infrastructure efforts, the initiative is designed to encourage:
    • Shared driver development standards
    • Better collaboration between hardware vendors and kernel maintainers
    • Open governance models for driver ecosystems
    • Improved testing, validation, and long-term maintenance
    The effort appears aligned with the Linux Foundation’s long-standing role as a neutral organization coordinating open-source collaboration across industries.
    A Push for Industry-Wide Collaboration
    The initiative arrives at a time when Linux is increasingly used in:
    • AI and high-performance computing
    • Automotive and software-defined vehicles
    • Telecommunications and Open RAN infrastructure
    • Embedded devices and edge computing
    Several Linux Foundation-hosted projects already emphasize open infrastructure and hardware collaboration, including Automotive Grade Linux (AGL) and networking initiatives focused on open radio access networks.

    By launching a dedicated effort around drivers, the Linux Foundation is attempting to reduce fragmentation and improve interoperability across hardware ecosystems.
    Why This Matters for Linux Users
    For everyday Linux users, better open driver support can lead to:
    Go to Full Article


  • Canonical Unveils Ubuntu AI Strategy: Local Models, User Control, and Smarter Workflows
    by George Whittaker
    Canonical has officially revealed its long-anticipated plans to bring artificial intelligence features into Ubuntu, marking a significant shift for one of the world’s most widely used Linux distributions. Rather than rushing into the AI wave, Canonical is taking a measured, privacy-focused approach, one that aims to enhance the operating system without compromising its open-source values.

    The rollout is expected to take place gradually throughout 2026, with early features likely appearing in upcoming Ubuntu releases.
    A Gradual, Thoughtful AI Rollout
    Canonical isn’t positioning Ubuntu as an “AI-first” operating system. Instead, the company is introducing AI in stages, focusing on practical improvements rather than hype-driven features.

    The plan follows a two-phase model:
    • Implicit AI features: enhancements running quietly in the background
    • Explicit AI features: user-facing tools and workflows powered by AI
    This approach allows Ubuntu to evolve naturally, improving existing functionality before introducing more advanced capabilities.
    Local AI First, Not the Cloud
    One of the most important aspects of Canonical’s strategy is its emphasis on local AI processing, also known as on-device inference.

    Instead of sending data to remote servers, Ubuntu will aim to:
    • Run AI models directly on the user’s hardware
    • Reduce reliance on cloud services
    • Improve privacy and performance
    Canonical has made it clear that local inference will be the default, with cloud-based options available only when explicitly chosen by the user.

    This aligns closely with the privacy expectations of Linux users, who often prefer greater control over their data.
    What AI Features Could Look Like
    Canonical has outlined several potential use cases for AI inside Ubuntu. These include:
    Accessibility Improvements
    AI will enhance tools like:
    • Speech-to-text
    • Text-to-speech
    • Assistive technologies
    These features aim to make Ubuntu more inclusive and easier to use for a wider range of users.
    Smarter System Assistance
    Future AI features may help users:
    • Troubleshoot system issues
    • Interpret logs and error messages
    • Automate repetitive tasks
    This could significantly lower the learning curve for new Linux users.
    Agent-Based Automation
    Canonical is also exploring “agentic” AI workflows, where AI can take actions on behalf of the user.

    Examples include:
    Go to Full Article


  • Thunderbird 150 Lands on Linux: Smarter Encryption, Better Tools, and a Polished Experience
    by George Whittaker
    Mozilla has officially rolled out Thunderbird 150.0, the latest version of its open-source email client, bringing a mix of security-focused enhancements, usability upgrades, and workflow improvements for Linux and other platforms. Released in April 2026, this update continues Thunderbird’s steady evolution as a powerful desktop email solution.

    For Linux users, Thunderbird 150 delivers meaningful updates that improve both everyday usability and advanced email handling, especially for encrypted communication.
    Stronger Support for Encrypted Email
    One of the standout improvements in Thunderbird 150 is how it handles encrypted messages.

    Users can now:
    • Search inside encrypted emails (OpenPGP and S/MIME)
    • Generate “unobtrusive” OpenPGP signatures that appear cleaner to recipients
    These changes make encrypted communication far more practical, especially for users who rely on secure email for work or privacy-sensitive tasks.
    New Productivity and Workflow Features
    Thunderbird 150 introduces several small but impactful workflow improvements:
    • A new Account Hub opens automatically on first launch, simplifying setup
    • Recent Destinations in settings can now be sorted alphabetically
    • Address book entries can be copied as vCard files
    • A new custom accent color option allows interface personalization
    These updates make Thunderbird easier to configure and more flexible to use daily.
    Improved Built-In PDF Viewer
    Thunderbird’s integrated PDF viewer gets a useful upgrade: users can now reorder pages directly within the viewer.

    This is particularly helpful for:
    • Managing attachments without external tools
    • Editing documents quickly before sending
    • Streamlining email-based workflows
    Combined with ongoing security fixes, the PDF viewer becomes both more capable and safer.
    Calendar and Interface Enhancements
    Several improvements focus on usability and accessibility:
    • Calendar views now support touchscreen scrolling
    • Fixed issues with calendar layouts and navigation
    • Better screen reader support and accessibility fixes
    • General UI refinements across the application
    These changes contribute to a smoother, more consistent user experience across devices.
    Bug Fixes and Stability Improvements
    Thunderbird 150 also resolves a wide range of issues, including:
    Go to Full Article


  • Linux Kernel 6.19 Reaches End of Life: Time to Move Forward
    by George Whittaker
    The Linux kernel continues its fast-paced release cycle, and with that comes an important milestone: Linux kernel 6.19 has officially reached end of life (EOL). For users and distributions still running this branch, it’s now time to upgrade to a newer kernel version.

    This isn’t unexpected, as Linux 6.19 was never intended to be a long-term release, but it does serve as a reminder of how quickly non-LTS kernel branches move through their lifecycle.
    Official End of Support
    The final update in the 6.19 series, Linux 6.19.14, has been released and marked as the last maintenance version. Kernel maintainer Greg Kroah-Hartman confirmed that no further updates will follow, stating that the branch is now officially end-of-life.

    On kernel.org, the 6.19 series is now listed as EOL, meaning it will no longer receive bug fixes or security patches.
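    The EOL status shown on kernel.org is also published machine-readably, in the releases.json feed at kernel.org. A sketch that derives the still-supported series from that feed, fed a hand-written sample in what I understand to be the feed's shape (the field names "releases", "version", and "iseol" should be verified against the live JSON):

```python
import json

def supported_series(releases_json: dict) -> set:
    """Return the major.minor kernel series not flagged end-of-life
    ("iseol") in a kernel.org releases.json-style payload."""
    out = set()
    for rel in releases_json.get("releases", []):
        if not rel.get("iseol", False):
            out.add(".".join(rel["version"].split(".")[:2]))
    return out

# Hand-written sample mirroring the feed's shape, not live data:
sample = json.loads("""{"releases": [
    {"version": "6.19.14", "moniker": "stable",   "iseol": true},
    {"version": "6.18.9",  "moniker": "longterm", "iseol": false}
]}""")
print("6.19" in supported_series(sample))  # False
```

    A cron job comparing the running kernel's series against this set is a cheap way to notice, before the security advisories do, that a branch has dropped off the supported list.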
    Why 6.19 Had a Short Lifespan
    Unlike some kernel releases, Linux 6.19 was not a long-term support (LTS) version. Short-lived kernel branches are typically supported for only a few months before being replaced by newer releases.

    Linux follows a rapid development model:
    • New major versions are released frequently
    • Short-term branches receive limited updates
    • Only selected kernels are designated as LTS for extended support
    Because of this, 6.19 was always meant to be a stepping stone rather than a long-term foundation.
    What Users Should Do Now
    With 6.19 no longer maintained, continuing to use it poses risks, especially in environments where security and stability matter.

    Recommended upgrade paths include:
    Upgrade to Linux 7.0
    The most direct path forward is the Linux 7.0 kernel series, which succeeds 6.19 and introduces new hardware support and ongoing fixes.

    This is a good option for:
    • Desktop users
    • Rolling-release distributions
    • Users who want the latest features
    Switch to an LTS Kernel
    For production systems, servers, or long-term stability, moving to an LTS kernel is often the better choice.

    Current LTS options include:
    • Linux 6.18 LTS (supported until 2028)
    • Linux 6.12 LTS (supported until 2028)
    • Linux 6.6 LTS (supported until 2027)
    These versions receive ongoing security updates and are better suited for stable environments.
    Why EOL Matters
    When a kernel reaches end of life:
    Go to Full Article


  • Archinstall 4.2 Shifts to Wayland-First Profiles, Leaving X.Org Behind
    by George Whittaker
    The Arch Linux installer continues evolving alongside the broader Linux desktop ecosystem. With the release of Archinstall 4.2, a notable change has arrived: Wayland is now the default focus for graphical installation profiles, while traditional X.Org-based profiles have been removed or deprioritized.

    This move reflects a wider transition happening across Linux, one that is gradually redefining how graphical environments are built and used.
    A Turning Point for Archinstall
    Archinstall, the official guided installer for Arch Linux, has steadily improved over time to make installation more accessible while still maintaining Arch’s minimalist philosophy.

    With version 4.2, the installer now aligns more closely with modern desktop trends by emphasizing Wayland-based environments during setup, instead of offering traditional X.Org configurations as first-class options.

    This doesn’t mean X.Org is completely gone from Arch Linux, but it does signal a clear shift in direction.
    Why Wayland Is Taking Over
    Wayland has been gaining traction for years as the successor to X.Org, offering a more streamlined and secure approach to rendering graphics on Linux.

    Compared to X.Org, Wayland is designed to:
    • Reduce complexity in the graphics stack
    • Improve security by isolating applications
    • Deliver smoother rendering and better performance
    • Support modern display technologies like high-DPI and variable refresh rates
    As the Linux ecosystem evolves, many distributions and desktop environments are prioritizing Wayland as the default display protocol.
    What Changed in Archinstall 4.2
    With this release, users installing Arch through Archinstall will notice:
    • Wayland-based desktop environments and compositors are now the primary options
    • X.Org-centric setups are no longer emphasized in guided profiles
    • Installation workflows better reflect modern Linux defaults
    This simplifies the installation experience for new users, who no longer need to choose between legacy and modern display systems during setup.
    What About X.Org?
    While Archinstall is moving forward, X.Org itself is not disappearing overnight.

    Many applications and workflows still rely on X11, and compatibility is maintained through XWayland, which allows X11 applications to run within Wayland sessions.
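    Whether a given session ended up on Wayland or X11 is easy to check from a script, since the session manager exports it via the standard XDG_SESSION_TYPE environment variable. A small sketch (reads the variable only; it does not detect per-application XWayland use):

```python
import os

def session_type(environ=os.environ) -> str:
    """Report the display protocol of the current session.
    XDG_SESSION_TYPE is set by the session manager, typically to
    "wayland", "x11", or "tty"."""
    return environ.get("XDG_SESSION_TYPE", "unknown")

print(session_type({"XDG_SESSION_TYPE": "wayland"}))  # wayland
```

    On an Archinstall 4.2 system this will normally report "wayland", with individual X11 applications running transparently under XWayland inside that session.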

    For advanced users, Arch still provides full flexibility:
    Go to Full Article


  • OpenClaw in 2026: What It Is, Who’s Using It, and Whether Your Business Should Adopt It
    by George Whittaker
    “probably the single most important release of software, probably ever.”

    — Jensen Huang, CEO of NVIDIA


    Wow! That’s a bold statement from one of the most influential figures in modern computing.

    But is it true? Some people think so. Others think it’s hype. Most are somewhere in between, aware of OpenClaw, but not entirely sure what to make of it. Are people actually using it? Yes. Who’s using it? More than you might expect. Is it experimental, or is it already changing how work gets done? That depends on how it’s being applied. Is it more relevant for businesses or consumers right now? That’s one of the most important, and most misunderstood, questions.

    This article breaks that down clearly: what OpenClaw is, how it works, who is using it today, and where it actually creates value.

    What makes OpenClaw different isn’t just the technology; it’s where it fits. Most of the AI tools people are familiar with still require a human to take the next step. They assist, but they don’t execute. OpenClaw changes that dynamic by connecting decision-making directly to action. Once you understand that shift, the rest of the discussion (who’s using it, how it’s being deployed, and where it creates value) starts to make a lot more sense.


    Top 10 Questions About OpenClaw
    What is OpenClaw?

    OpenClaw is an open-source AI agent framework that enables large language models like Claude, GPT, and Gemini to execute real-world tasks across software systems, including APIs, files, and workflows.

    What does OpenClaw actually do?

    OpenClaw functions as an execution layer that allows AI systems to take actions, such as sending emails, updating CRM records, or running scripts, instead of only generating responses.
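The pattern described above, a model emitting a structured request that an execution layer carries out, can be sketched generically. Note that the action names, JSON shape, and dispatch table below are illustrative assumptions, not OpenClaw's actual API:

```python
import json

# Illustrative action registry; a real execution layer would bind these
# names to email APIs, CRM clients, or shell scripts.
ACTIONS = {
    "send_email": lambda p: f"email sent to {p['to']}",
    "update_record": lambda p: f"record {p['id']} updated",
}

def execute(model_output: str) -> str:
    """Parse a model's structured output and dispatch it to a concrete action."""
    request = json.loads(model_output)
    action = ACTIONS.get(request["action"])
    if action is None:
        raise ValueError(f"unknown action: {request['action']}")
    return action(request.get("params", {}))
```

The key design point is the boundary: the model only produces text, and the execution layer decides which of a fixed set of actions that text may trigger.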

    Do you need to be a developer to use OpenClaw?

    No, but technical familiarity helps. Non-developers can use prebuilt workflows, while developers can customize and scale implementations more effectively.

    Is OpenClaw more suited for business or consumer use?

    OpenClaw is currently more suited for business and technical use cases where structured workflows exist. Consumer use is emerging but remains secondary.

    How is OpenClaw different from ChatGPT or Claude?

    ChatGPT and Claude generate outputs, while OpenClaw enables those outputs to trigger actions across connected systems.

    Who created OpenClaw?
    Go to Full Article


  • Linux Kernel Developers Adopt New Fuzzing Tools
    by George Whittaker
    The Linux kernel development community is stepping up its security game once again. Developers, led by key maintainers like Greg Kroah-Hartman, are actively adopting new fuzzing tools to uncover bugs earlier and improve overall kernel reliability.

    This move reflects a broader shift toward automated testing and AI-assisted development, as the kernel continues to grow in complexity and scale.
    What Is Fuzzing and Why It Matters
    Fuzzing is a software testing technique that feeds random or unexpected inputs into a program to trigger crashes or uncover vulnerabilities.

    In the Linux kernel, fuzzing has become one of the most effective ways to detect:
    - Memory corruption bugs
    - Race conditions
    - Privilege escalation flaws
    - Edge-case failures in subsystems
    Modern fuzzers like Syzkaller have already discovered thousands of kernel bugs over the years, making them a cornerstone of Linux security testing.
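The core idea is simple enough to sketch in a few lines: throw randomized inputs at a target and record which ones make it fail. The toy parser and its deliberate bug here are invented for illustration; real kernel fuzzers drive system calls rather than Python functions:

```python
import random

def naive_fuzz(target, trials=1000, max_len=64, seed=0):
    """Feed random byte strings to `target` and collect inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, repr(exc)))
    return crashes

def toy_parser(data: bytes) -> bytes:
    """Deliberately buggy parser: assumes a header byte is always present."""
    header = data[0]                 # IndexError on empty input
    if header > 0x7f:
        raise ValueError("unsupported frame type")
    return data[1:]
```

Even this naive loop finds both failure modes of the toy parser quickly; production fuzzers add coverage feedback, input mutation, and crash deduplication on top of the same principle.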
    New Tools Enter the Scene
    Recently, kernel maintainers have begun experimenting with new fuzzing frameworks and tooling, including a project internally referred to as “clanker”, which has already been used to identify multiple issues across different kernel subsystems.

    Early testing has uncovered bugs in areas such as:
    - SMB/KSMBD networking code
    - USB and HID subsystems
    - Filesystems like F2FS
    - Wireless and device drivers
    The speed at which these issues were discovered suggests that these new tools are significantly improving bug detection efficiency.
    AI and Smarter Fuzzing Techniques
    One of the most interesting developments is the growing role of AI and machine learning in fuzzing.

    New research projects like KernelGPT use large language models to:
    - Automatically generate system call sequences
    - Improve test coverage
    - Discover previously hidden execution paths
    These techniques can enhance traditional fuzzers by making them smarter about how they explore the kernel’s behavior.

    Other advancements include:
    - Better crash analysis and deduplication tools (like ECHO)
    - Configuration-aware fuzzing to explore deeper kernel states
    - Feedback-driven fuzzing loops for improved coverage
    Together, these innovations help developers focus on the most meaningful bugs rather than sifting through duplicate reports.
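A feedback-driven loop differs from blind random testing in one way: inputs that reach new code are kept and mutated further. This is a minimal sketch of that idea; real fuzzers such as syzkaller obtain the coverage signal from kernel instrumentation (KCOV), while here the target is assumed to report its own coverage:

```python
import random

def mutate(data: bytes, rng) -> bytes:
    """Flip one random byte — the simplest possible mutation strategy."""
    if not data:
        return bytes([rng.randrange(256)])
    buf = bytearray(data)
    buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def coverage_guided_fuzz(target, seed_input=b"\x00", rounds=2000, seed=0):
    """Keep inputs that reach new coverage; mutate only from that corpus.

    `target` must return a set of coverage identifiers (e.g. branch ids).
    """
    rng = random.Random(seed)
    corpus = [seed_input]
    seen = set(target(seed_input))
    for _ in range(rounds):
        candidate = mutate(rng.choice(corpus), rng)
        cov = target(candidate)
        if cov - seen:              # new branches reached: keep this input
            seen |= cov
            corpus.append(candidate)
    return corpus, seen
```

Because only coverage-increasing inputs enter the corpus, later mutations start from inputs that already penetrate deep into the target, which is what lets feedback-driven fuzzers reach states blind fuzzing almost never hits.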
    Why This Shift Is Happening Now
    The Linux kernel is one of the most complex software projects in existence. With millions of lines of code and contributions from thousands of developers, manually catching every bug is nearly impossible.
    Go to Full Article


  • GNOME 50 Reaches Arch Linux: A Leaner, Wayland-Only Future Arrives
    by George Whittaker
    Arch Linux users are among the first to experience the latest GNOME desktop, as GNOME 50 has begun rolling out through Arch’s repositories. Thanks to Arch’s rolling-release model, new upstream software like GNOME arrives quickly, giving users early access to the newest features and architectural changes.

    With GNOME 50, that includes one of the most significant shifts in the desktop’s history.
    A Major GNOME Milestone
    GNOME 50, officially released in March 2026 under the codename “Tokyo,” represents six months of development and refinement from the GNOME community.

    Unlike some previous versions, this release focuses less on dramatic redesigns and more on strengthening the foundation of the desktop, improving performance, modernizing graphics handling, and simplifying long-standing complexities.

    For Arch Linux users, that translates into a more streamlined and future-ready desktop environment.
    Goodbye X11, Hello Wayland-Only Desktop
    The headline change in GNOME 50 is the complete removal of X11 support from GNOME Shell and its window manager, Mutter.

    After years of gradual transition:
    - X11 sessions were first deprecated
    - Then disabled by default
    - And now fully removed in GNOME 50
    This means GNOME now runs exclusively on Wayland, with legacy X11 applications handled through XWayland compatibility layers.

    The result is a simpler, more modern graphics stack that reduces maintenance overhead and improves long-term performance and security.
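Users who want to confirm which protocol their session is actually using can check the standard environment variables set by the session manager. A minimal sketch (the fallback logic here is an assumption, not part of any GNOME tooling):

```python
import os

def display_protocol(environ=os.environ) -> str:
    """Report whether the current session runs on Wayland or X11.

    XDG_SESSION_TYPE is set by the session manager (e.g. GDM) to
    "wayland" or "x11"; WAYLAND_DISPLAY is set inside Wayland sessions.
    """
    session = environ.get("XDG_SESSION_TYPE", "")
    if session:
        return session
    return "wayland" if "WAYLAND_DISPLAY" in environ else "unknown"
```

On a GNOME 50 desktop this should report "wayland"; X11 applications running under XWayland still live inside that Wayland session.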
    Improved Graphics and Display Handling
    GNOME 50 brings several key improvements to display and graphics performance:
    - Variable Refresh Rate (VRR) enabled by default
    - Better fractional scaling support
    - Improved compatibility with NVIDIA drivers
    - Enhanced HDR and color management
    These changes aim to deliver smoother animations, more responsive desktops, and better support for modern displays.

    For gamers and users with high-refresh monitors, these upgrades are especially noticeable.
    Performance and Responsiveness Gains
    Beyond graphics, GNOME 50 includes multiple performance optimizations:
    - Faster file handling in the Files (Nautilus) app
    - Improved thumbnail generation
    - Reduced stuttering in animations
    - Better resource usage across the desktop
    These refinements make the desktop feel more responsive, particularly on systems with demanding workloads or multiple monitors.
    New Parental Controls and Accessibility Features
    GNOME 50 also expands its focus on usability and accessibility.
    Go to Full Article


Page last modified on November 02, 2011, at 10:01 PM