
- Debian Bookworm DSA-6259-1 PyJWT Important Authentication Flaw
It was discovered that PyJWT, a Python implementation of JSON web tokens, insufficiently validated the "crit" header parameter, which could result in incomplete enforcement of authentication settings. For the oldstable distribution (bookworm), this problem has been fixed in version 2.6.0-1+deb12u1.
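For context, the "crit" header comes from the JWS specification (RFC 7515): it lists extension header parameters that the verifier must understand, and a verifier that does not implement one of them is required to reject the token. A minimal sketch of that check in plain Python follows; this is an illustration of the rule, not PyJWT's actual API, and the `SUPPORTED_CRIT` set and sample tokens are made up:

```python
import base64
import json

# Extension header parameters this (hypothetical) verifier implements
SUPPORTED_CRIT = {"b64"}

def check_crit(token: str) -> bool:
    """Return True only if every 'crit' entry names an extension we support.

    RFC 7515 requires rejecting a JWS whose 'crit' list contains a
    parameter the verifier does not implement."""
    header_b64 = token.split(".")[0]
    header_b64 += "=" * (-len(header_b64) % 4)  # restore base64url padding
    header = json.loads(base64.urlsafe_b64decode(header_b64))
    return all(name in SUPPORTED_CRIT for name in header.get("crit", []))

def encode_header(header: dict) -> str:
    """Build the base64url-encoded JOSE header segment of a token."""
    raw = json.dumps(header).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Signature/payload segments are dummies; only the header matters here
ok_token = encode_header({"alg": "HS256", "crit": ["b64"]}) + ".x.y"
bad_token = encode_header({"alg": "HS256", "crit": ["exp-bypass"]}) + ".x.y"
```

A library that skips this check, or applies it incompletely, can end up honoring a token whose critical extensions it never enforced, which is the shape of flaw described in the advisory.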
- Debian 11 DLA-4573-1 libpng Important Heap Disclosure Fix
A security vulnerability has been discovered in libpng, a library implementing an interface for reading and writing PNG (Portable Network Graphics) files, which could lead to corrupted chunk data and potential heap information disclosure. For Debian 11 bullseye, this problem has been fixed in version
- Debian Bookworm Linux Important Local Escalation Fix DSA-6258-1
Two vulnerabilities have been discovered in the Linux kernel that may lead to local privilege escalation. For the oldstable distribution (bookworm), these problems have been fixed in version 6.1.170-3. We recommend that you upgrade your linux packages.

- More stable kernels with partial Dirty Frag fixes
Greg Kroah-Hartman has released the 6.1.171, 5.15.205, and 5.10.255 stable kernels, quickly followed by the 6.1.172 and 5.15.206 kernels. This is another round of stable kernels to provide fixes for one of the CVEs (CVE-2026-43284) assigned following the Dirty Frag and Copy Fail 2 security disclosures. There is not, yet, a stable kernel with a fix for CVE-2026-43500, though a patch to fix the second half is in the works.
- [$] Forgejo "carrot disclosure" raises security questions
An unusual, some might say hostile, approach to disclosing an alleged remote-code-execution (RCE) flaw in the Forgejo software-collaboration platform has sparked a multifaceted conversation. A so-called "carrot disclosure" in April has raised questions about the researcher's methods of unveiling a security problem, Forgejo's security policies, and the project's overall security posture.
- killswitch for short-term emergency vulnerability mitigation
It seems that we are in for an extended period of the disclosure of vulnerabilities before fixes become available. One possible way of coping with this flood might be the killswitch proposal from Sasha Levin. In short, killswitch can immediately disable access to specific functionality in a running kernel, essentially blasting a vulnerable path (and its associated functionality) out of existence until a fix can be installed. "For most users, the cost of 'this socket family stops working for the day' is much smaller than the cost of running a known vulnerable kernel until the fix lands."
- [$] A 2026 DAMON update
The kernel's DAMON subsystem provides user-space monitoring and management of system memory. DAMON is developing rapidly, so an update on its progress has become a regular feature of the annual Linux Storage, Filesystem, Memory Management, and BPF Summit. This tradition continued at the 2026 gathering with an update from DAMON creator SeongJae Park covering a long list of new capabilities — tiering, data attributes monitoring, transparent huge pages, and more — being added to this subsystem.
- Security updates for Friday
Security updates have been issued by AlmaLinux (libsoup and mingw-libtiff), Debian (apache2, chromium, lcms2, libreoffice, and prosody), Fedora (openssl and perl-Starman), Oracle (git-lfs, libsoup, and perl-XML-Parser), Slackware (libgpg, mozilla, and php), SUSE (389-ds, cairo, cf-cli, chromedriver, cri-tools, freeipmi, gnutls, grafana, java-11-openjdk, java-17-openjdk, jetty-minimal, libmariadbd-devel, librsvg, mesa, mozjs52, mutt, nix, opencryptoki, python-Django, python-django, python-pytest, rmt-server, thunderbird, traefik, webkit2gtk3, wireshark, and xen), and Ubuntu (civicrm, dpkg, htmlunit, lcms2, libpng1.6, linux, linux-*, linux-azure, linux-azure-fips, linux-raspi, linux-xilinx, lua5.1, nasm, opam, openexr, openjpeg2, owslib, postfix, postfixadmin, and vim).
- Four stable kernels with partial fixes for Dirty Frag
Greg Kroah-Hartman has announced the release of the 7.0.5, 6.18.28, 6.12.87, and 6.6.138 stable kernels. These kernels contain a partial fix for the Dirty Frag and Copy Fail 2 security flaws. Kroah-Hartman has confirmed that a second patch is required, but it is still in development and has not yet been merged.
- Dirty Frag: a zero-day universal Linux LPE
Hyunwoo Kim has announced the Dirty Frag security flaw, a local-privilege-escalation (LPE) vulnerability similar to the recently disclosed Copy Fail flaw:
Because the embargo has now been broken, no patches or CVEs exist for these vulnerabilities. After consultation with the linux-distros@vs.openwall.org maintainers, and at the maintainers' request, I am publicly releasing this Dirty Frag document.
As with the previous Copy Fail vulnerability, Dirty Frag likewise allows immediate root privilege escalation on all major distributions.
Kim, who discovered the flaw and had attempted a coordinated disclosure set for May 12, has released the code for an exploit, as well as an example script to remove the vulnerable modules. A full write-up, with the disclosure timeline, is also available. It's unknown at this time whether this is an example of parallel discovery or how the third party was able to disclose it prior to the end of the embargo. We will be following up as more information comes to light.
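Kim's actual mitigation script is not reproduced here. As a rough sketch of the idea, the snippet below parses /proc/modules-style output and flags modules that an administrator might then unload or blacklist; the SUSPECT names are assumptions chosen for illustration, not taken from the exploit write-up:

```python
def loaded_modules(proc_modules_text: str) -> set:
    """Parse /proc/modules-style text; the first field of each line is a module name."""
    return {line.split()[0] for line in proc_modules_text.splitlines() if line.strip()}

# Hypothetical module names for the affected xfrm-ESP and RxRPC paths
# (assumptions for illustration only)
SUSPECT = {"rxrpc", "esp4", "esp6"}

# Sample text in the format of /proc/modules
sample = (
    "rxrpc 835584 0 - Live 0x0000000000000000\n"
    "loop 40960 2 - Live 0x0000000000000000\n"
)
flagged = SUSPECT & loaded_modules(sample)  # candidates for rmmod/blacklisting
```

On a real system one would read `/proc/modules` itself and act on whatever is flagged; consult the actual script and write-up before relying on any mitigation like this.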
- [$] A new era for memory-management maintainership
On April 21, Andrew Morton let it be known that he intends to begin stepping away from the maintainership of the kernel's memory-management subsystem — a responsibility he has carried since before memory management was even seen as its own subsystem. At the 2026 Linux Storage, Filesystem, Memory Management, and BPF Summit, one of the first sessions in the memory-management track was devoted to how the maintainership would be managed going forward. There are a lot of questions still to be answered.
- An update on KDE's Union style engine
Arjen Hiemstra has published an article on the status of the Union project: a single system to support all of KDE's technologies used for styling applications.
The work on Union's Breeze implementation has progressed to the point where it is very hard to distinguish whether or not you are running the Union version. We have also tested with a bunch of applications and made sure that any differences were fixed. So we are at a stage where we need to get Union into the hands of more people, both to get extra people testing whether there are any major issues, but also to have interested people creating new styles.
This means that with the upcoming Plasma 6.7 release, we plan to include Union. Discussion is currently ongoing whether we will enable it by default, but even if not there will be a way to try it out.
See Hiemstra's introductory article on Union, published in February 2025, for more about the project and its creation. Plasma 6.7 is expected to be released in mid-June.
- Security updates for Thursday
Security updates have been issued by AlmaLinux (dovecot, fence-agents, freeipmi, git-lfs, image-builder, kernel, libsoup, osbuild-composer, and python-tornado), Debian (apache2, libdatetime-timezone-perl, lrzip, tzdata, and wireshark), Fedora (dovecot, forgejo-runner, gh, gnutls, krb5, nano, pdns, pyOpenSSL, squid, vim, and xorg-x11-server-Xwayland), Mageia (graphicsmagick, kernel-linus, krb5-appl, libexif, libtiff, nano, nginx, ntfs-3g, opam, perl-Net-CIDR-Lite, perl-Starlet, perl-Starman, tcpflow, and virtualbox), Oracle (dovecot, fence-agents, freeipmi, image-builder, kernel, libcap, LibRaw, libsoup, openssh, osbuild-composer, python, python-tornado, python3, systemd, thunderbird, and tigervnc), SUSE (containerd, curl, erlang, flatpak, java-11-openjdk, java-21-openjdk, java-25-openjdk, liblxc-devel, libpng12, libthrift-0_23_0, openCryptoki, openexr, openssl-3, python3, python311-social-auth-core, rclone, skim, and thunderbird), and Ubuntu (apache2, coin3, editorconfig-core, insighttoolkit, linux, linux-aws, linux-aws-6.17, linux-gcp, linux-gcp-6.17, linux-hwe-6.17, linux-oracle, linux-realtime, linux-realtime-6.17, linux-azure, linux-azure-6.17, linux-oem-6.17, linux-azure-5.15, linux-gcp-6.8, nghttp2, python-dynaconf, slurm-wlm, swish-e, and webkit2gtk).
- [$] LWN.net Weekly Edition for May 7, 2026
Inside this week's LWN.net Weekly Edition: Front: LLMs and security; restartable sequences and TCMalloc; Fedora and GNOME bug reports; Prolly trees; Arm on s390. Briefs: NHS open source; Alpine outage; GCC 16.1; Incus 7.0 LTS; NetHack 5.0.0; PHP license; Quotes; ... Announcements: Newsletters, conferences, security updates, patches, and more.
- [$] LLM-driven security reports disrupt coordinated disclosure
Predictions that LLM tools would cause a surge in reports of security vulnerabilities have, unquestionably, been borne out. As expected, maintainers are having to wade through more security reports than ever before; in addition, LLM tools are disrupting traditional coordinated-disclosure practices as well. The method of Copy Fail's disclosure, in particular, left vendors, projects, and users scrambling. Maintainers are also seeing parallel discovery of the same security flaws within the embargo window. Both of these developments mean that coordinated security disclosures may become a thing of the past.
- Incus 7.0 LTS released
Version 7.0 of the Incus container and virtual-machine management system has been released. Notable changes in this release include a low-level backup API, the addition of basic S3 operations directly in Incus to replace the now-unmaintained MinIO project, as well as the removal of support for cgroups v1 and xtables (iptables/ip6tables/ebtables). This is a long-term-support (LTS) release, with support through June 2031.
The first two years will feature bug and security fixes as well as minor usability improvements, delivered through occasional point releases (7.0.x). After that initial period, Incus 7.0 LTS will move to security-only maintenance for the remainder of its five years of support.
A total of 204 individuals contributed to Incus between the 6.0 LTS and 7.0 LTS releases, with 45 contributing between the 6.23 and 7.0 LTS releases.
- Security updates for Wednesday
Security updates have been issued by AlmaLinux (corosync, dovecot, image-builder, python-tornado, resource-agents, and systemd), Debian (openjdk-11, openjdk-17, and pyjwt), Fedora (pdns, pyOpenSSL, and squid), Slackware (hunspell), SUSE (alloy, avahi, bubblewrap, cmctl, coredns, curl, dpkg, firefox, golang-github-prometheus-prometheus, grafana, libpng12, PackageKit, sed, and xen), and Ubuntu (docker.io-app, nghttp2, python-django, and python-mako).

- IOT-GATE-RPI5 is a Fanless Raspberry Pi CM5 Gateway with RS485 and CAN-FD
CompuLab has unveiled the IOT-GATE-RPI5, an industrial IoT edge gateway built around the Raspberry Pi Compute Module 5. The system combines the BCM2712 quad-core Cortex-A76 processor with industrial interfaces, optional cellular connectivity, and support for wide operating temperatures. The gateway is based on the Broadcom BCM2712 processor with four Cortex-A76 cores clocked at 2.4GHz, paired […]
- AMD's Local, Open-Source AI Can Now Easily Interact With Your Gmail
AMD software engineers continue rapidly advancing their open-source software efforts around local AI/LLM use on consumer-class Radeon and Ryzen hardware. AMD GAIA 0.17.6 was released on Thursday with more improvements for local AI processing on Windows, Linux, and even macOS. For those trusting enough in local LLM pipelines to do the right thing, there is even integration now for AMD GAIA to interface with your Gmail account...
- New GCC Back-End Proposed For WebAssembly
When it comes to compiling C/C++ code to WebAssembly (WASM), LLVM/Clang and other LLVM-based tooling have dominated the space. Nearly a decade ago there was a proposal for a GCC WebAssembly back-end that ultimately was never merged; now there is a new proposal for a WebAssembly back-end for the GNU toolchain...
- Linux Erroneously Thinks Intel Bartlett Lake CPUs Run At 7GHz
With Intel's recently-launched Bartlett Lake P-core-only processors intended for the embedded market, there is a rather surprising oversight under Linux: the Intel P-State driver reports a 7.0+ GHz clock speed. While many would yearn for a 7GHz CPU, the Core 9 273PE where this issue was discovered can in reality only boost up to 5.7GHz at its maximum turbo frequency...
- What is DNF Package Manager
DNF is the default package manager for Fedora and RHEL-based distributions such as AlmaLinux, helping you manage your system's packages.
- Luckfox Aura is a Linux SBC with RV1126B processor, 3 TOPS NPU, and dual CSI
Luckfox has expanded its Linux SBC lineup with the new Aura, a compact board based on the Rockchip RV1126B processor. Similar to the earlier Pico Pi and Lyra Pi series, it combines a Raspberry Pi-sized form factor with a quad-core Cortex-A53 CPU, a 3 TOPS NPU, dual MIPI CSI interfaces, and 4K H.264/H.265 video support. […]
- Cat and Tac Command Usage on Linux
The cat command is pretty useful for reading, creating, and concatenating files. The tac command works similarly, but outputs lines in reverse order, printing the last line first.
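The difference between the two commands is easy to see on a three-line file. This short Python sketch mimics their behavior (the file name is arbitrary):

```python
from pathlib import Path

# Create a small demo file
demo = Path("demo.txt")
demo.write_text("one\ntwo\nthree\n")

cat_lines = demo.read_text().splitlines()  # cat: lines in file order
tac_lines = list(reversed(cat_lines))      # tac: last line first
```

In other words, `cat demo.txt` prints one, two, three, while `tac demo.txt` prints three, two, one.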

- 10 People Called Police to Report Bigfoot Sighting in Ohio
CNN reports on a "sudden surge of claimed sightings" of "unidentified figures averaging 8 feet tall in wooded areas" along Ohio's Mahoning River. "And it stopped just as quickly as it started," says Jeremiah Byron, host of the Bigfoot Society Podcast, which collected and mapped the reports.... Byron doesn't take every report at face value, making sure he talks to people directly before publicizing their claims. Once word got out about the reports in Ohio, so did the obvious fakes. "I started to get a lot of AI-generated reports in my email. It got to the point where I was probably getting about 1,000 emails a day," he says. But when Byron spoke by phone with people who made the initial reports, they convinced him they weren't making anything up. "It was obvious they weren't just wanting to get their name out there," says Byron. "They were just freaked out by what they experienced, and they didn't want anything else to do with it." [...] Local law enforcement in Ohio also seem to be enjoying the publicity. Portage County Sheriff Bruce D. Zuchowski made a series of gag posts purporting to show the arrest of Bigfoot and his detention by Immigration and Customs Enforcement, only for the creature to escape from custody at the Canadian border... Despite the levity, the sheriff's office really did get some calls from concerned residents, Zuchowski says. "Ten individual people were like, 'Yeah I was walking my dog at 4 a.m. and I saw this hairy figure and I smelled this musty odor and there was this big thing and all of a sudden it ran,'" the sheriff told CNN affiliate WOIO in March.
Read more of this story at Slashdot.
- Newspaper Chain's Reporters Withhold Their Bylines to Protest 'AI-Assisted' Articles
A chain of 30 U.S. newspapers including the Sacramento Bee, the Miami Herald and the Idaho Statesman "has started to use a new AI tool that can summarize traditional articles and spit out different versions for different audiences," reports the New York Times. And the chain's reporters "are not happy about it." Journalists in many of the company's newsrooms are now withholding their bylines from articles created by the new tool, meaning that those articles will run with a generic credit rather than a reporter's name, as is customary. They are also labeled AI-assisted. "We don't want to put our bylines on stories we did not actually write even if they're based on our work," said Ariane Lange, an investigative reporter at the Sacramento Bee and the vice chair of the Sacramento Bee News Guild. "That in itself feels like a lie." The reporters' byline strike is one of the sharpest conflicts yet between journalists and their companies over the use of AI. Related debates are playing out in newsrooms across the country, as publishers experiment with new AI tools to streamline work that used to take hours, and some even use it to write full articles... [E]xecutives have promoted the tool internally as a way to increase the number of articles published and ultimately gain new subscribers... [Eric Nelson, the vice president of local news] said using reporters' bylines on the AI-generated articles was a way to show "authority" on Google so the search engine would rank the articles higher in the results. He also said the company was experimenting with feeding in reporters' notes to create articles. "Journalists who embrace and experiment with this tool are going to win," Nelson said in the meeting. "Journalists who are defiant will fall behind."...
McClatchy's public AI policy states that the company uses AI tools to summarize articles to "help readers quickly understand the main points of a single story or catch up on multiple stories about a larger topic," and that editors review the output before publication.
Read more of this story at Slashdot.
- Why Some US Schools Are Cutting Back On the Technology They Spent Billions On
America's school districts "spent billions on technology during the pandemic," reports the Washington Post. "But now some states are limiting in-school screen time because of concerns about its impact on children." Nationwide [U.S.] schools invested at least $15 billion and possibly as much as $35 billion from federal pandemic relief funds on laptops, learning software and other technology between 2020 and 2024, according to an estimate by the Edunomics Lab, an education think tank. By last school year, 88% of public schools reported in a federal survey they had given every child a laptop, tablet or similar device. Now, some states and school districts are walking back their technology use following pressure from parents who claim too much in-school screen time has zapped children's attention spans and left them worse off academically. At least a dozen states introduced or adopted policies this year that attempt to regulate screen time in schools — from prescribing limits to allowing families to opt out of virtual instruction... In Missouri, a bill that would require every school district in the state to come up with a screen-time policy is making its way through the state legislature. "Ed tech is just big tech in a sweater vest," said Missouri state Rep. Tricia Byrnes (R), who introduced the legislation and blames what she described as the overuse of technology for middling test scores... Complicating the issue is research that shows students do not see any academic gains when provided with laptops. A meta-analysis of studies on reading comprehension suggests paper-based texts are better than digital-based reading... A body of research has established that excessive or unstructured screen time can have detrimental effects on children, including harming language development, weakening social skills and triggering anxiety and depression.
But the effects of school-issued devices and in-school usage on children's development are less understood, said Tiffany Munzer, a developmental behavioral pediatrician and digital media researcher at the University of Michigan. Some studies report that high-quality digital tools can support students' learning goals, Munzer said. But "a lot of the apps that are marketed as educational ... are not actually educational and contain a lot of commercialized content."
Read more of this story at Slashdot.
- Humanoid Robot Becomes Buddhist Monk In South Korea
A four-foot humanoid robot named Gabi has become a monk at a Buddhist temple in Seoul, participating in a modified initiation ceremony where it pledged to respect life, obey humans, act peacefully toward other robots and objects. "Robots are destined to collaborate with humans in every field in the future," Hong Min-suk, a manager at the Jogye Order, the largest sect of Buddhism in South Korea, tells the New York Times. "It will only be natural for them to be part of our festival." Smithsonian Magazine reports: For the temple, this marks the first time a robot has participated in the sugye initiation ceremony, when followers pledge their devotion to the Buddha and his teachings. Gabi -- a Buddhist name that refers to mercy, Yonhap News Agency reports -- was made by Unitree Robotics, a Chinese civilian robotics company. The model, G1, retails starting at $13,500. During the ceremony, Gabi agreed to five vows usually recited by human monks and slightly altered for the humanoid. The robot pledged to respect life, act with peace toward other robots and objects, listen to humans, refrain from acting or speaking in a deceptive manner and save energy. Gabi participated in a modified yeonbi purification ritual. While a human monk normally receives a small incense burn on the arm, instead Gabi received a lotus lantern festival sticker and a prayer bead necklace. The landmark event aligns with the promise made during a New Year's address by the Venerable Jinwoo, president of the Jogye Order of Korean Buddhism, to incorporate artificial intelligence into the Buddhist tradition. "We aim to fearlessly lead the A.I. era and redirect its achievements toward the path of attaining peace of mind and enlightenment," he said, per a statement.
Read more of this story at Slashdot.
- Fiber Optic Cables Can Eavesdrop On Nearby Conversations
sciencehabit shares a report from Science Magazine: Cold War spies planted bugs in walls, lamps, and telephones. Now, scientists warn, the cables themselves could listen in. A fiber optic technique used to detect earthquakes can also pick up the faint vibrations of nearby speech, researchers reported this week here at the general assembly of the European Geosciences Union. Freely available artificial intelligence (AI) software turned the fiber optic data into intelligible, real-time transcripts. "Not many people realize that [fiber optic cables] can detect acoustic waves," says Jack Lee Smith, a geophysicist at the University of Edinburgh who presented the result. "We show that in almost every case where you use these fibers, this could be a privacy concern." Fiber optics can pick up on sound through a technique called distributed acoustic sensing (DAS). Using a machine called an interrogator, researchers fire laser pulses down a cable and record the pattern of reflections coming back from tiny glass defects along the length of the fiber optic. When an earthquake's seismic wave crosses a section of the fiber, it stretches and squeezes the defects, leading to shifts in the reflected light that researchers can use to build a picture of an earthquake. DAS essentially turns a fiber cable into a long chain of seismometers that can detect not only earthquakes, but also the rumblings of volcanoes, cars, and college marching bands. And although scientists set up dedicated fiber lines specifically for research, DAS can also be performed on "dark fiber" -- unused strands in the web of fiber optics that runs through cities and across oceans, carrying the world's internet traffic. DAS can also be used to eavesdrop, the work of Smith and his colleagues shows. They conducted a field test using an existing DAS setup used to study coastal erosion. They set a speaker next to the cable and played pure tones, music, and speech. 
Human speech contains frequencies ranging from a few hundred to several thousand hertz. The low end of the range could be pulled out of the data "even without any preprocessing," Smith says. "You can easily see acoustic waves." Getting higher frequency speech took a bit of postprocessing, but it was possible. Dumping the data directly into Whisper, a free AI transcription tool, provided accurate real-time transcription. However, this technique worked only for coiled cables, exposed at the surface, at distances of up to 5 meters from the speaker. Burying the cable under just 20 centimeters of dirt was enough to muddy the speech. And straight cables -- even exposed ones right next to the speaker -- did not record speech well.
Read more of this story at Slashdot.
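The claim that the low end of the speech range "can easily be seen" in the vibration data can be illustrated with a toy example: scanning candidate frequencies with a naive DFT to find the dominant tone in a synthesized signal. This is a from-scratch sketch under made-up parameters, not the researchers' DAS pipeline:

```python
import math

def dominant_frequency(samples, rate, fmax=600):
    """Scan integer frequencies up to fmax (Hz) and return the one with the
    most energy in the signal, using a naive discrete Fourier projection."""
    best_f, best_mag = 0, 0.0
    for f in range(1, fmax):
        re = sum(s * math.cos(2 * math.pi * f * i / rate) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * f * i / rate) for i, s in enumerate(samples))
        mag = re * re + im * im
        if mag > best_mag:
            best_f, best_mag = f, mag
    return best_f

rate = 4000  # samples per second (illustrative, not the DAS interrogator's rate)
# 0.25 seconds of a pure 300 Hz tone, in the low end of the speech range
tone = [math.sin(2 * math.pi * 300 * i / rate) for i in range(rate // 4)]
```

A real DAS setup recovers far messier strain data from laser reflections, but the principle is the same: low-frequency components stand out clearly, while higher-frequency speech content takes more postprocessing, as the article notes.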
- NASA Keeps Track As Mexico City Sinks Into the Ground
An anonymous reader quotes a report from the Guardian: Walking into Mexico City's sprawling central Zocalo is a dizzying experience. At one end of the plaza, the capital's cathedral, with its soaring spires, slumps in one direction. An attached church, known as the Metropolitan Sanctuary, tilts in the other. The nearby National Palace also seems off-kilter. The teetering of many of the capital's historic buildings is the most visible sign of a phenomenon that has been ongoing for more than a century: Mexico City is sinking at an alarming rate. Now, the metropolis's descent is being tracked in real time thanks to one of the most powerful radar systems ever launched into space. Known as Nisar, the satellite can detect minute changes in Earth's surface, even through thick vegetation or cloud cover. "Nisar takes radar imaging observations of Earth to the next level," said Marin Govorcin, a scientist at Nasa's jet propulsion laboratory. "Nisar will see any change big or small that happens on Earth from week to week. No other imaging mission can claim this." Though not the first time that Mexico City's sinking has been observed from space, the Nisar mission has provided a greater sense of how far the sinking spreads and how it changes across different types of land than any other space-based sensor. It has also been able to penetrate areas on the outskirts of the city that were previously challenging to study because of the complex terrain. The implications of the imagery extend far beyond the Mexican capital. "This study of Mexico City speaks to the realm of possibilities that will open up thanks to the Nisar system," said Dario Solano-Rojas, an engineer at the National Autonomous University of Mexico (Unam). "And not just for sinking cities but also for studying volcanoes, for studying the deformation associated with earthquakes, for studying landslides." 
According to Nasa, the technology is also capable of monitoring the climate crisis, glacier sliding, agricultural productivity, soil moisture, forestry, coastal flooding and more. The Nisar system found that some parts of the city are dropping by more than 2cm a month. "First documented in 1925, the city's sinking is a result of centuries of exploitation of the groundwater," the report says. "Because Mexico City and its surrounds were built on an ancient lake bed, the soil beneath the city is extremely soft. When water is pumped out of the aquifer below, this clay-like earth compacts, resulting in a city that is quietly sinking." The crisis is also self-reinforcing: as the city sinks, aging pipes crack and leak, causing Mexico City to lose an estimated 40% of its water, even as drought and climate change make supplies more fragile.
Read more of this story at Slashdot.
- Does Fidelity's Reorganization Signal the Beginning of the End for 'Small-Team Agile'?
Longtime Slashdot reader cellocgw writes: Hiding inside another layoff report, Fidelity is reorganizing: "The changes are aimed at moving the teams away from an 'agile' makeup -- comprising smaller, siloed squads -- and toward larger teams built to move faster on projects." OMG, as they say: "Sudden outbreak of common sense." According to the Boston Globe, Fidelity is cutting about 1,000 jobs even as it plans to hire roughly 5,300 new workers, many of them early-career engineers. Half of the 3,300 new workers hired this year "will be in tech or product-related roles," the report says, noting that "about 2,000 of those jobs are currently open, and 400 of them are in tech/product-delivery." "The company also plans to add almost 2,000 new early-career workers, with the goal of making the tech and product-delivery teams more hands-on. In all, that means roughly 5,300 new jobs in the pipeline for Fidelity." The company says AI isn't driving the shift; as cellocgw noted, it's about moving toward larger teams that Fidelity says can move faster on priority projects. The financial services firm also reported a strong 2025 under CEO Abigail Johnson, with managed assets rising 19% from 2024 to $7.1 trillion and revenue climbing 15% to $37.7 billion. "Throughout the company's history, our investments in technology have fueled our growth and customer service capabilities," Johnson wrote in a letter (PDF) included in the company's annual report. "We will continue to prioritize technology initiatives that help us advance digital capabilities, simplify our technology ecosystem, and protect the firm and our customers."
Read more of this story at Slashdot.
- Micron Ships Gigantic 245TB SSD
BrianFagioli writes: Micron says it is now shipping the world's highest-capacity commercially available SSD, and the numbers are honestly hard to wrap your head around. The new Micron 6600 ION packs 245TB into a single drive and is aimed squarely at AI infrastructure, hyperscalers, and cloud providers dealing with exploding data growth. According to the company, the SSD can reduce rack counts by 82 percent compared to HDD deployments offering similar raw capacity, while also cutting power usage and cooling requirements. Micron says the drive tops out at roughly 30W, which it claims is about half the power draw of comparable hard drive setups. The announcement also feels like another warning sign for spinning disks in the enterprise. Hard drives still dominate bulk storage because of lower cost per terabyte, but SSD capacities keep climbing into territory that used to belong exclusively to HDDs. Micron is also touting major performance gains, claiming up to 84 times better energy efficiency for AI workloads and dramatically lower latency versus HDD-based systems. While nobody is dropping one of these into a home NAS anytime soon, the idea of a quarter petabyte on a single SSD no longer sounds like science fiction.
Read more of this story at Slashdot.
- New Linux 'Dirty Frag' Zero-Day Gives Root On All Major Distros
mrspoonsi shares a report: Dirty Frag is a vulnerability class, first discovered and reported by Hyunwoo Kim (@v4bel), that can obtain root privileges on major Linux distributions by chaining the xfrm-ESP Page-Cache Write vulnerability and the RxRPC Page-Cache Write vulnerability. Dirty Frag extends the bug class to which Dirty Pipe and Copy Fail belong. Because it is a deterministic logic bug that does not depend on a timing window, no race condition is required, the kernel does not panic when the exploit fails, and the success rate is very high. Because the embargo has been broken, no patch or CVE currently exists. "As with the previous Copy Fail vulnerability, Dirty Frag likewise allows immediate root privilege escalation on all major distributions, and it chains two separate vulnerabilities," Kim said. Detailed technical information can be found here. BleepingComputer notes that the two vulnerabilities chained by Dirty Frag are "now tracked under the following CVE IDs: the xfrm-ESP one was assigned CVE-2026-43284, and the RxRPC issue is now CVE-2026-43500."
Read more of this story at Slashdot.
- Thousands of Vibe-Coded Apps Expose Corporate and Personal Data On the Open Web
An anonymous reader quotes a report from Wired: Security researcher Dor Zvi and his team at the cybersecurity firm he cofounded, RedAccess, analyzed thousands of vibe-coded web applications created using the AI software development tools Lovable, Replit, Base44, and Netlify and found more than 5,000 of them that had virtually no security or authentication of any kind. Many of these web apps allowed anyone who merely finds their web URL to access the apps and their data. Others had only trivial barriers to that access, such as requiring that a visitor sign in with any email address. Around 40 percent of the apps exposed sensitive data, Zvi says, including medical information, financial data, corporate presentations, and strategy documents, as well as detailed logs of customer conversations with chatbots. "The end result is that organizations are actually leaking private data through vibe-coding applications," says Zvi. "This is one of the biggest events ever where people are exposing corporate or other sensitive information to anyone in the world." Zvi says RedAccess' scouring for vulnerable web apps was surprisingly easy. Lovable, Replit, Base44, and Netlify all allow users to host their web apps on those AI companies' own domains, rather than the users'. So the researchers used straightforward Google and Bing searches for those AI companies' domains combined with other search terms to identify thousands of apps that had been vibe coded with the companies' tools. 
Of the 5,000 AI-coded apps that Zvi says were left publicly accessible to anyone who simply typed their URLs into a browser, he found close to 2,000 that, upon closer inspection, seemed to reveal private data: Screenshots of web apps he shared with WIRED -- several of which WIRED verified were still online and exposed -- showed what appeared to be a hospital's work assignments with the personally identifiable information of doctors, a company's detailed ad purchasing information, what appeared to be another firm's go-to-market strategy presentation, a retailer's full logs of its chatbot's conversations with customers, including the customers' full names and contact information, a shipping firm's cargo records, and assorted sales and financial records from a variety of other companies. In some cases, Zvi says, he found that the exposed apps would have allowed him to gain administrative privileges over systems and even remove other administrators. In the case of Lovable, Zvi says he also found numerous examples of phishing sites that impersonated major corporations, including Bank of America, Costco, FedEx, Trader Joe's, and McDonald's, that appeared to have been created with the AI coding tool and hosted on Lovable's domain. "Anyone from your company at any moment can generate an app, and this is not going through any development cycle or any security check," Zvi says. "People can just start using it in production without asking anyone. And they do."
Read more of this story at Slashdot.
- Pentagon Begins Releasing New Files On UFOs
The Pentagon has begun releasing new UFO/UAP files through a newly launched public website, starting with 162 documents from agencies including the FBI, State Department, NASA, and others. Officials say more files will be released on a rolling basis. The Associated Press reports: The Pentagon has begun releasing new files on UFOs, saying members of the public can draw their own conclusions on "unidentified anomalous phenomena" like an object that a drone pilot says shone a bright light in the sky and then vanished. It said in a post on X on Friday that while past administrations sought to discredit or dissuade the American people, President Donald Trump "is focused on providing maximum transparency to the public, who can ultimately make up their own minds about the information contained in these files." It said additional documents will be released on a rolling basis. Besides the Pentagon, the effort is led by the White House, the director of national intelligence, the Energy Department, NASA and the FBI. A newly unveiled website housing the documents on unidentified anomalous phenomena, or UAPs, has a decidedly retro feel, with black-and-white military imagery of flying objects displayed prominently on the page, with statements displayed in typewriter-like font. The first release includes 162 files, such as old State Department cables, FBI documents and transcripts from NASA of crewed flights into space. One document details an FBI interview with someone identified as a drone pilot who, in September 2023, reported seeing a "linear object" with a light bright enough to "see bands within the light" in the sky. "The object was visible for five to ten seconds and then the light went out and the object vanished," according to the FBI interview. Another file is a NASA photograph from the Apollo 17 mission in 1972, showing three dots in a triangular formation. 
The Pentagon says in an accompanying caption that "there is no consensus about the nature of the anomaly" but that a new, preliminary analysis indicated that it could be a "physical object."
Read more of this story at Slashdot.
- Apple, Intel Have Reached Preliminary Chip-Making Agreement
Apple and Intel have reportedly reached a preliminary agreement (paywalled; alternative source) for Intel to manufacture some chips used in Apple devices, after more than a year of talks and pressure from the Trump administration. It's still unclear which Apple products would use Intel-made chips, but the deal would mark a major potential win for Intel's foundry ambitions and give Apple another manufacturing option beyond TSMC.
Read more of this story at Slashdot.
- AI Hard Drive Shortage Makes Archiving the Internet Harder
An anonymous reader quotes a report from 404 Media: Skyrocketing hard drive and storage costs caused by the AI data center boom are making it more expensive and more difficult for digital archivists, academics, Wikipedia, and hobby data hoarders to save data and archive the internet. Specific drives favored by some high profile organizations like the Internet Archive have become far more expensive or are difficult to find at all, archivists said. Over the last several months, prices for both consumer level and enterprise solid state drives, hard drives, and other types of storage have skyrocketed. As an example, a 2TB external Samsung SSD I purchased last fall for $159 now costs $575. PC Part Picker, a website that tracks the average price of different types of drives, shows a universal increase in storage prices starting in about October of last year. Prices of many of the drives it tracks have doubled or increased by more than 150 percent, and at some stores SSDs and hard drives are simply sold out. There is now even a secondary market for some SSDs, with people scalping them on eBay and elsewhere. Brewster Kahle, founder of the Internet Archive and the Wayback Machine, the most important archiving projects in the history of the internet, told 404 Media that the skyrocketing cost of storage is "a very real issue costing us time and money." "We have found that the preferred 28-30TB drives are just not available or at very high price," Kahle said. "We gather over 100 terabytes of new materials each day, and we have over 210 Petabytes of materials already archived on machines that need continuous upgrades and maintenance, so we need to constantly get new hard drives." "We are fortunate to have an active community that donates to the Archive, and we are also looking for help from hard drive manufacturers in these difficult times. We are always looking for more help," he added. 
"So far we have ways to work around these shortages, but it is a very real issue causing us time and money." The Wikimedia Foundation, which runs Wikipedia and various other projects, including Wikimedia Commons, an open repository of royalty free media, told 404 Media that the cost of storage has become a concern for the foundation's projects as well. "With over 65 million articles on Wikipedia alone, access to server and storage capacity is vital to us. We've certainly seen price increases since the end of 2025. These price increases are of concern to us, as with every other player in the industry. We see the primary impact in the purchase of memory and hard drives but also in terms of lead times on server deliveries and our capacity to place future orders," a Wikimedia Foundation spokesperson told us. "The Wikimedia Foundation is a non-profit, and as such how we allocate budget is very carefully considered. We maintain our own data centers to serve our users from all over the world. We're putting workarounds in place where we can, mainly involving being smart with how we prioritize investment in hardware, building in flexibility as well as extending the life of existing hardware where possible." Western Digital, one of the largest manufacturers of hard drives and other storage systems, said that it has essentially sold out of its 2026 inventory to enterprise clients, many of which run data centers. Micron, which made RAM and SSDs under the brand name Crucial, has exited the consumer market altogether because "AI-driven growth in the data center has led to a surge in demand for memory and storage. Micron has made the difficult decision to exit the Crucial consumer business in order to improve supply and support for our larger, strategic customers in faster-growing segments."
Read more of this story at Slashdot.
- Chrome Silently Installs a 4GB AI Model On Your Device Without Consent
Longtime Slashdot reader couchslug shares a report from That Privacy Guy's Alexander Hanff: Two weeks ago I wrote about Anthropic silently registering a Native Messaging bridge in seven Chromium-based browsers on every machine where Claude Desktop was installed. The pattern was: install on user launch of product A, write configuration into the user's installs of products B, C, D, E, F, G, H without asking. Reach across vendor trust boundaries. No consent dialog. No opt-out UI. Re-installs itself if the user removes it manually, every time Claude Desktop is launched. This week I discovered the same pattern, executed by Google. Google Chrome is reaching into users' machines and writing a 4GB on-device AI model file to disk without asking. The file is named weights.bin. It lives in OptGuideOnDeviceModel. It is the weights for Gemini Nano, Google's on-device LLM. Chrome did not ask. Chrome does not surface it. If the user deletes it, Chrome re-downloads it. The legal analysis is the same one I gave for the Anthropic case. The environmental analysis is new. At Chrome's scale, the climate bill for one model push, paid in atmospheric CO2 by the entire planet, is between six thousand and sixty thousand tons of CO2-equivalent emissions, depending on how many devices receive the push. That is the environmental cost of one company unilaterally deciding that two billion people's default browser will mass-distribute a 4GB binary they did not request.
Read more of this story at Slashdot.
- Cloudflare To Cut About 20% Workforce As AI Adoption Reshapes Operations
Cloudflare plans to cut about 20% of its workforce, or more than 1,100 employees, as it restructures around an "agentic AI-first operating model." Reuters reports: Cloudflare CEO Matthew Prince and co-founder Michelle Zatlyn said in a message to employees that the company was reimagining every team and function to operate in what they described as an agentic AI era. Cloudflare said the job cuts reflect a redesign of internal processes and roles, rather than a response to employee performance or short-term cost pressures. The company added that its own use of AI has increased more than sixfold over the past three months, prompting major changes in how teams operate.
Read more of this story at Slashdot.

- Security: Why Linux Is Better Than Windows Or Mac OS
Linux is a free and open source operating system first released in 1991 by Linus Torvalds. Since its release it has reached a user base that is widespread worldwide. Linux users swear by the reliability and freedom that this operating system offers, especially when compared to its counterparts, Windows and …
- Essential Software That Are Not Available On Linux OS
An operating system is essentially the most important component in a computer. It manages the different hardware and software components of a computer in the most effective way. There are different types of operating systems, and each comes with its own set of programs and software. You cannot expect a Linux program to have all …
- Things You Never Knew About Your Operating System
The advent of computers has brought about a revolution in our daily life. From computers that were too huge to fit in a room, we have come a very long way to desktops and even palmtops. These machines have become our virtual lockers, and a life without these networked machines has become unimaginable. Sending mails, …
- How To Fully Optimize Your Operating System
Computers and systems are tricky and complicated. If you lack a thorough knowledge or even basic knowledge of computers, you will often find yourself in a bind. You must understand that something as complicated as a computer requires constant care and constant cleaning up of junk files. Unless you put in the time to configure …
- The Top Problems With Major Operating Systems
There is no system that does not give you any problems. Even if your system and its operating system are easy to understand, there will be times when certain problems arise. Most of these problems are easy to handle and easy to get rid of. But you must be …
- 8 Benefits Of Linux OS
Linux is a small and fast-growing operating system. However, we can't term it as software yet. As discussed in the article about what a Linux OS can do, Linux is a kernel. Now, kernels are used for software and programs. These kernels are used by the computer and can be used with various third-party software …
- Things Linux OS Can Do That Other OS Cant
What Is Linux OS? Linux, similar to Unix, is an operating system which can be used for various computers, handheld devices, embedded devices, etc. The reason why a Linux-based operating system is preferred by many is because it is easy to use and re-use. A Linux-based operating system is technically not an operating system. Operating …
- Packagekit Interview
PackageKit aims to simplify the management of applications on Linux and GNU systems. The main objective is to remove the pain it takes to set up a system. In an interview, Richard Hughes, the developer of PackageKit, said that he aims to make Linux systems just as powerful as Windows or …
- What’s New in Ubuntu?
What Is Ubuntu? Ubuntu is open source software. It is useful for Linux-based computers. The software is marketed by Canonical Ltd. and the Ubuntu community. Ubuntu was first released in late October 2004. Ubuntu uses the Java, Python, C, C++ and C# programming languages. What Is New? Version 17.04 is now available …
- Ext3 Reiserfs Xfs In Windows With Regards To Colinux
The problem with Windows is that there are various limitations to the computer and there is only so much you can do with it. You can access Ext3, ReiserFS, and XFS by using the coLinux tool. Download the tool from the official site or from the SourceForge site. Edit the connection to “TAP Win32 Adapter …

- Google is tying reCAPTCHA to Google Play Services, screwing over de-Googled Android users
The ways in which Google can lock you into their ecosystem are often obvious, but sometimes, they're incredibly sneaky and easily missed. CAPTCHA tests are annoying, but at the same time, they can help protect websites from bots. While these tests are already the bane of our internet existence, they are going to get worse for some Android users. A requirement for Google's next-generation reCAPTCHA system will make it a lot harder for de-Googled phones to browse the web. A Reddit user has highlighted a seemingly innocuous support page for Google's reCAPTCHA system. The page in question relates to troubleshooting reCAPTCHA verification on mobile. In the document, it says that you'll need to use a compatible mobile device to complete verification. If you have an Android phone, then that means you'll need to be running Google Play Services version 25.41.30 or higher. ↫ Ryan McNeal at Android Authority When was the last time you actively thought about reCAPTCHA being a Google property? Even then, when was the last time you imagined something as annoying but ultimately basic as a captcha prompt could be used to tie people to Google Play Services, and thus to blessed Android? Every time we manage to work around one of these asinine ties to Google Play Services, another one pops up to ruin our day. We're so stupidly tied down to and entirely dependent on two very mid at best mobile operating systems, and it's such a stupid own goal for especially everyone outside of the US to just sit there and do nothing about it. Worse yet, it seems we're only tying ourselves down further, while paying for the privilege. At the very least we should be categorising certain services (government ID services, payment services, popular messaging platforms, and a few more) as vital infrastructure, and legally mandate that these services have clearly defined and well-documented APIs so anyone is free to make alternative clients. The fact that many people are tied to either iOS or blessed Android because of something as stupid as what bank they use or the level of incompetency of their government ID service should be a major crisis in any country that isn't the US. I don't want to use iOS or Android, but nobody is leaving me any choice. It's infuriating.
- Why don’t lowercase letters come right after uppercase letters in ASCII?
With that context, I always found it strange that the designers of ASCII included 6 characters after uppercase Z before starting the lowercase letters. Then it hit me: we have 26 letters in the English alphabet, plus 6 additional characters before lowercase starts: 26 + 6 = 32. If you know anything about computers, powers of 2 tend to stick out. Let’s take a look at the binary representations of some characters compared to their lowercase counterparts. ↫ Tyler Hillery I only have a middling understanding of the rest of the article and thus the ultimate reason why ASCII includes those six characters between Z and a, but I think it comes down to making certain operations on uppercase and lowercase letters specifically more elegant. In some deep crevices of my brain all of this makes sense, but I find it very difficult to truly understand and explain as someone who knows little about programming.
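The pattern is easy to check for yourself. Here is a small illustration (not from the article) showing that the six-character gap pads the uppercase block to 32 slots, so every case pair differs only in bit 5:

```python
# Each ASCII lowercase letter is its uppercase counterpart with bit 5
# (0x20 = 32) set; the six characters between 'Z' (0x5A) and 'a' (0x61)
# pad the uppercase block so the two blocks sit exactly 32 apart.
def toggle_case(ch: str) -> str:
    # Flip bit 5 to switch between cases (valid for ASCII letters only)
    return chr(ord(ch) ^ 0x20)

for ch in "AaZz":
    print(f"{ch}: {ord(ch):07b}")

print(ord('a') - ord('A'))  # 32
```

This single-bit difference is why classic C tricks like `c | 0x20` (lowercase) and `c & ~0x20` (uppercase) work at all.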
- Detecting (or not) the use of -l and -c together in Bourne shells
Many Bourne shells go slightly beyond the POSIX sh specification to also support a -l option that makes the shell act as a login shell. POSIX's omission of -l isn't only because it doesn't really talk about login shells at all; it's also because Unix has a special way of marking login shells that goes back very far in its history. The -l option isn't necessarily what login and sshd and so on use; it's something that you can use if you specifically want to get a login shell in an unusual circumstance. Bourne shells also have a -c 'command string' option that causes the shell to execute the command string rather than be interactive (this is a long-standing option that is in POSIX). It may surprise you to hear that most or all Bourne shells that support -l also allow you to use -l and -c together. Basically all Bourne shells interpret this as first executing your .profile and so on, then executing the command string instead of going interactive. One use for this is to non-interactively run a command line in the context of your fully set up shell, with $PATH and other environment variables ready for use. ↫ Chris Siebenmann Now, what if you want to detect the use of these two options combined, for instance to make it so certain parts of your .profile are ignored? It turns out very few Bourne shells actually support this, and that's what Siebenmann's latest post is about.
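In bash specifically (one of the few shells where the combination is detectable), a sketch of such a check in ~/.profile might look like this. Note that `shopt -q login_shell` and `BASH_EXECUTION_STRING` are bash features, not POSIX, so this does not generalize to other Bourne shells:

```shell
# bash-only sketch: detect "bash -l -c 'command'" from inside ~/.profile.
# "shopt -q login_shell" succeeds only in a login shell, and
# BASH_EXECUTION_STRING holds the -c argument when one was given.
if shopt -q login_shell 2>/dev/null && [ -n "${BASH_EXECUTION_STRING-}" ]; then
    # Non-interactive login shell running a -c command: a .profile could
    # skip interactive-only setup here (prompts, key bindings, etc.)
    echo "login + -c detected"
fi
```

In most other Bourne shells there is simply no variable that records the -c string, which is the gap Siebenmann's post explores.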
- Fedora Project Leader says he doesn't care about the reputational damage from Fedora embracing AI
On the Fedora forums, there's a long-running thread about a proposal for Fedora to build a variant of the distribution aimed specifically at AI. The "problem" identified in the proposal is that setting up the various parts that a developer in the AI space needs is currently quite difficult on Fedora, and as such, a bunch of technical steps need to be taken to make this easier. Setting aside the AI of the proposal and ensuing discussion, it's actually a very interesting read, going deep into the weeds about consequential questions like building an LTS kernel on Fedora, support for out-of-tree kernel mods, and a lot more. To spoil the ending: the proposal has already been approved unanimously by the Fedora Council, meaning the efforts laid out in the proposal will be undertaken. This means that, depending on progress, we'll see a Fedora AI Desktop or whatever it's going to be called somewhere in the timeframe from Fedora 45 to Fedora 47. As a Fedora user on all my machines, I'm obviously not too happy about this, since I'd much rather the scarce resources of a project like Fedora go towards things not as ethically bankrupt, environmentally destructive, and artistically deficient as AI, but in the end it's a project owned and controlled by IBM, so it's not exactly unexpected. What really surprised me in this entire discussion is a post by Fedora Project Leader Jef Spaleta, responding to worries people in the thread were having about such a big AI undertaking under the Fedora branding causing serious reputational damage to Fedora as a whole. These concerns are clearly valid, as people really fucking hate AI, doubly so in the open source community, whose work (AI coding tools especially) is built on without any form of consent. As such, Fedora undertaking a big AI desktop project is bound to have a negative impact on Fedora's image. Just look at what aggressively pushing Copilot has done to Windows 11's already shit reputation. Spaleta, however, just doesn't care. 
Literally. "As the Fedora Project Leader, I am absolutely not concerned about the reputational damage to this project that comes with setting up an entirely new output attractive to developers who want to make use of Ai tools." ↫ Jef Spaleta I've been looking at this line on and off for a few days now, and I just can't wrap my head around how the leader of an open source project built on and relying on the free labour of thousands of contributors says he doesn't care about reputational damage to the project he's leading. Effective and capable open source contributors are not exactly a commodity, and a lot of the decisions they make about which projects to donate their time to are based on vibes and personal convictions; you can't really pay them to look the other way. Saying you don't care about reputational damage to your huge open source project seems rather shortsighted, but of course, I don't lead a huge open source project, so what do I know? In the linked thread alone, one long-time Fedora contributor, Fernando Mancera, already decided to leave the project on the spot, and I have a sneaking suspicion he won't be the last. AI is a deeply tainted hype on many levels, and the more you try to chase this dragon, the more capable people you'll end up chasing away.
- Redox gets partial window pixel updating, tmux, and more
Another month, another Redox progress report; you know the drill by now. This past month Redox saw improved booting on real hardware by making sure the boot process continues even if certain drivers fail or become blocked. Thanks to some changes on the RISC-V side, running Redox on real RISC-V hardware has also improved. Furthermore, tmux has been ported to Redox, CPU time reporting has been improved, and Orbital, Redox's desktop environment, gained support for partial window pixel updating, which should increase UI performance. On top of that, there's a brand new web user interface to browse Redox packages (x86-64, i586, ARM64 (aarch64), and RISC-V (riscv64gc)), as well as the usual list of improvements to the kernel, drivers, relibc, and many more areas of the operating system.
- Setting up a Sun Ray server on OpenIndiana Hipster 2025.10
Time for another Sun Ray blog post! I've had a few people email me asking for help setting up a Sun Ray server over the last few months, and despite my attempts to help them get it going there have been mixed results with running SRSS on OpenIndiana Hipster 2025.10. My Sun Ray server is still on an earlier OI snapshot, so I figured it was about time to try to actually follow the new guides myself. ↫ The Iris System Ever since my spiraling down the Sun rabbit hole late last year, I've tried a few times now to get the x86 version of OpenIndiana and Oracle Solaris working on any of my machines, exactly for the purposes of setting up a modern Sun Ray server. Sadly, none of my machines are compatible with any illumos distribution or Oracle Solaris, so I've been shit out of luck trying to get this side project off the ground. My Ultra 45 is sadly also not supported by any SPARC version of illumos or Oracle Solaris, so unless I buy even more hardware, my dream of a modern Sun Ray setup will have to wait. Of course, virtualisation is an option for many, and that's exactly what this particular guide is about: setting up OpenIndiana on a Proxmox virtual machine. I actually have a Proxmox machine up and running and could do this too, but I'm a sucker for running stuff like this on real hardware. Yes, that makes my life more complicated and difficult, and no, it's not more noble or real or hardcore; it's just a preference. Still, for normal people who pick up a Sun Ray or two on eBay for basically nothing, running OpenIndiana in a virtual machine is the smart, reasonable, and effective option.
- My favorite device is a Chromebook, without ChromeOS!
If you're sick of ChromeOS on your Chromebook, or can find a Chromebook for cheap somewhere but don't actually want to use ChromeOS, have you considered postmarketOS? Since I was kind of frustrated with ChromeOS, I decided to take a look at something that I knew had supported my Lenovo Duet 3 for some time: postmarketOS. For those who don't know, postmarketOS is an Alpine Linux-based distro focused on replacing the original OS of old phones (generally running Android) with a true Linux distro. They also seem to support some Chromebooks because of their unique architecture and, luckily, they support my device under the google-trogdor platform. ↫ kokada PostmarketOS is aimed primarily at smartphones, but supports other form factors just fine as well. The Duet 3 is one of the tablet-like devices it supports, and it seems most things are working quite well. In fact, judging by the postmarketOS wiki, quite a few Chromebooks have good support, and with Chromebooks being cheap and a dime a dozen on eBay and similar auction sites, it seems like a great way to get started with what is trying to become a true Linux for smartphones.
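For the curious, postmarketOS images are built and flashed with the project's pmbootstrap tool. A rough sketch of the flow, with the google-trogdor codename taken from the quoted post; the exact steps vary per device and the postmarketOS wiki page for your device is the authoritative guide:

```shell
# Rough sketch of the pmbootstrap flow; consult the postmarketOS wiki
# for device-specific steps (Chromebooks boot via depthcharge, not
# fastboot, so flashing details differ from phones).
pip install pmbootstrap          # or install it from your distro's repos
pmbootstrap init                 # interactive: pick vendor/device, e.g. google-trogdor
pmbootstrap install              # build the root filesystem image
pmbootstrap flasher flash_rootfs # write the image to the target device
```

The whole process runs in Alpine chroots on the host, so no cross-compilation setup is needed.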
- The text mode lie: why modern TUIs are a nightmare for accessibility
There is a persistent misconception among sighted developers: if an application runs in a terminal, it is inherently accessible. The logic assumes that because there are no graphics, no complex DOM, and no WebGL canvases, the content is just raw ASCII text that a screen reader can easily parse. The reality is different. Most modern Text User Interfaces (TUIs) are often more hostile to accessibility than poorly coded graphical interfaces. The very tools designed to improve the Developer Experience (DX) in the terminal—frameworks like Ink (JS/React), Bubble Tea (Go), or tcell—are actively destroying the experience for blind users. ↫ Casey Reeves The core reason should be obvious: the command-line interface, at its core, is just a stream of data with the newest data at the bottom, linearly going back in time as you go up. Any screen reader can deal with this fairly easily, and while I personally have no need for such a tool, I've heard from those that do that kernel-level screen readers are quite good at what they do. TUIs, or text-based user interfaces, made with modern frameworks are actually very different: they treat the terminal as a 2D grid where every character cell is a pixel, abandoning the temporal flow for a spatial layout. It should become immediately obvious that screen readers won't really know what to do with this, and Reeves gives countless examples, but the short version is this: the cursor jumps all over the place with every screen update, which makes screen readers go nuts. Various older TUIs, made in a time well before these modern TUI frameworks came about, were designed in a much more terminal-friendly way, or give you options to hide the cursor to solve the problem that way. Irssi, for example, uses VT100 scrolling regions instead of redrawing the whole screen every time something changes. I had never really stopped to think about TUIs and screen readers, as is common among us sighted people. 
The problems Reeves describes seem to stem not so much from TUIs being inherently inaccessible, but from modern frameworks not actually making use of the terminal's core feature set. I really hope Reeves' article shines a light on this problem, and that the people developing these modern TUIs start taking accessibility more seriously.
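For illustration, the VT100 scrolling-region trick mentioned above comes down to emitting the DECSTBM escape sequence. A minimal sketch; the sequence itself is standard VT100/ECMA-48, but the helper names are made up for this example:

```python
# Minimal sketch of DECSTBM (set top/bottom margins), the escape sequence
# Irssi-style TUIs use to scroll only part of the screen. Because ordinary
# newlines then scroll just that region, the terminal keeps a mostly linear
# text flow instead of a full-screen redraw on every update.
CSI = "\x1b["  # Control Sequence Introducer

def set_scroll_region(top: int, bottom: int) -> str:
    # DECSTBM: confine scrolling to rows top..bottom (1-based, inclusive)
    return f"{CSI}{top};{bottom}r"

def reset_scroll_region() -> str:
    # DECSTBM with no parameters restores the full screen as the region
    return f"{CSI}r"

# A TUI would emit set_scroll_region(1, 20) once, keep a status bar on the
# rows below, and let plain newlines scroll the top 20 rows thereafter.
```

The contrast with grid-oriented frameworks is that the latter reposition the cursor and repaint cells on every frame, which is exactly the behaviour that confuses screen readers.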
- Using duplicity to back up your FreeBSD desktop
Backing up in modern times, we’ve had ZFS snapshots and replication to make this task extremely easy. However, you may not have access to another ZFS endpoint for replication, need to diversify risk by using a non-ZFS tool for backup, or are simply using UFS2, living the old skool life. For these situations, my first recommendation is to lean on Tarsnap for its ease of use and simplicity, making restoration just as easy as backing up. But some situations call for a different approach. Maybe you have a strict firewall at your company that doesn’t allow Tarsnap data streams to egress from your corporate network, or you have internal/easy access to storage endpoints, such as S3-compatible object storage or a large-file storage location with SFTP access. When you are faced with the latter, the duplicity (sysutils/duplicity in ports) utility is available as an easily installable package onto your FreeBSD system. ↫ Jason Tubnor at the FreeBSD Foundation The rest of the article explains how to use duplicity on FreeBSD for the purpose described above.
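As a flavour of what that looks like in practice, here is a hedged sketch of a duplicity run against an SFTP target; the host, paths, and GnuPG key ID below are placeholders, not details from the article:

```shell
# Hypothetical example: GnuPG-encrypted incremental backup of a home
# directory to an SFTP endpoint; all names here are placeholders.
duplicity --encrypt-key ABCD1234 \
    --exclude /home/user/.cache \
    /home/user sftp://backup@storage.example.com//backups/desktop

# Restoring the most recent backup into a scratch directory:
duplicity restore sftp://backup@storage.example.com//backups/desktop /tmp/restore
```

Duplicity decides between a full and an incremental run automatically, which is what makes it practical for unattended cron jobs against S3-compatible or SFTP storage.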
- Testing MacOS on the Apple Network Server 2.0 ROMs
Earlier this year, Mac OS and Windows NT-capable ROMs were discovered for Apple’s unique AIX Network Server. Cameron Kaiser has since spent more time digging into just how capable these ROMs are, and has published another one of his detailed stories about his efforts. Well, thanks to Jeff Walther who generously built a few replica ROM SIMMs for me to test, we can now try the 2.0 MacOS ROMs on holmstock, our hard-working Apple Network Server 700 test rig (stockholm, my original ANS 500, is still officially a production unit). And there are some interesting things to report, especially when we pit the preproduction ROMs and this set head-to-head in MacBench, and even try booting Rhapsody on it. ↫ Cameron Kaiser A great read, as always.
- Windows gets a new Run dialog
With Windows being as old and long-running as it is, there's a ton of old and outdated bits and pieces lurking in every nook and cranny. I have always found these old relics fascinating, especially now that over the past few years, Microsoft has attempted to replace some of those bits and pieces with modern replacements (not always to great success, but that's another story). One of those parts of the UI that's been virtually unchanged since the release of Windows 95 is the Run dialog, but that's about to change: Microsoft has released a completely new Run dialog to early testers. Windows Run, also known as the Run dialog, is a surface that has been around for over 30 years. It has become a heavily relied upon tool for developers and advanced users alike. Users have decades of muscle memory where they hit Win+R, navigate through their Run history, and hit Enter to quickly access various paths and tools. We all have our favorite tool we launch there as well. For us, some of our favorites are wt (Windows Terminal), mstsc (Remote Desktop) and winword (Microsoft Word). But it’s more than jUsT a TeXt BoX tHaT rUnS tHiNgS. The Run dialog can handle navigating both local and network file paths as well. And everything it does, it does fast. Win+R opens the run dialog seemingly instantly. If we wanted to modernize the Run Dialog to fit the modern Windows 11 design style, we had to make sure it did everything just as well as before. We needed to maintain the same performance while also keeping the user interface minimal, just as Windows 95 intended. ↫ Clint Rutkas at the Microsoft Dev Blogs The new Run dialog looks like it belongs in Windows 11, which is a nice improvement, but the most important part is that they actually seem to have made it a little faster. Sure, they may have only shaved off a few milliseconds from its opening time, but considering virtually everything else they've touched in Windows over the years got considerably slower, that's a good showing for Microsoft. 
The new feature they've added is that by typing ~\, you can open your home directory. The one casualty is the browse button, which, according to Microsoft's data, literally nobody ever used. I know it's just a small thing and in the end not even a remotely consequential one, but with an operating system as old and storied as Windows, replacing these ancient parts that millions of people rely on every day absolutely fascinates me. There must be a considerable amount of pressure on the people developing something like this new Run dialog, especially with Windows' reputation being at one of its lowest points, so it's good to see them being able to deliver. The new Run dialog is available today for testers, and if you're on the Windows Insider Experimental Channel, you can enable it in Settings > System > Advanced. Coincidentally, on my Windows 11 machine that I use for just one stupid video game, this Advanced page displays a loading spinner for five minutes and then just dies. Also, Notepad won't start (one time it showed this dialog), and using the terminal to load it causes the old Win32 version of Notepad to open after 5 minutes of waiting, which then hangs and crashes. People pay money for this.
- GNOME is good, actually
While I'm normally a KDE user, I do keep close tabs on various other desktop environments, and install and set them up every now and then to see how they're faring, what improvements they've made, and, ultimately, if my preference for KDE is still warranted. This usually means setting up a nice OpenBSD installation for Xfce, Fedora for GNOME, and, less often, others for some of the more niche desktop environments. Since GNOME 50 was just released, guess whose turn it is this time around? Since everybody's already made up their mind about their preferred desktop eons ago, with upsides and downsides debated far past their expiration date, I'm not particularly interested in reviewing desktop environments or Linux distributions. However, after asking around on Fedi, it seemed there was quite a bit of interest in an article detailing how I set up GNOME, what changes I make to the defaults, which extensions I use, what tweaks I apply, and so on. Of course, everything described in this article is highly personal, and I'm not arguing that this is the optimal way to tweak GNOME, that the extensions I use are the best ones, or that any visual modifications I make are better than whatever defaults GNOME uses. No, my goal with this article is twofold: one, to highlight that GNOME is a lot more configurable, extensible, and malleable than common wisdom on the internet would have you believe. It's not KDE or one of those cobbled-together tiling Wayland desktops, but it's definitely not as rigid as you might think. And two, that GNOME is good, actually. Tools of the trade The first thing I do is install a few crucial tools that make it easier to modify and tweak GNOME. I really dislike lists in articles, but I will begrudgingly use one here: After installing all of these tools, the actual tweaking can commence. Visual tweaks I didn't use to like GNOME's Adwaita visual style, but over the years, it started growing on me to the point where I don't actively dislike it anymore.
With the arrival of libadwaita, it has also become effectively impossible to theme modern GNOME applications, so even if you do change to something else, many of your applications won't follow along. If consistency is something you care about, you'll stick to Adwaita, but that leaves one problem unresolved: applications that still use GTK3. These applications will follow a much older version of Adwaita, making them stand out like eyesores among all the modern GTK4 stuff. Luckily, since GTK3 applications are still properly themeable, this is easily fixed: just install the adw-gtk3 theme, either by hand, or through your distribution's repositories. To enable it, first install the user themes extension through Extension Manager, and then enable the theme in GNOME Tweaks for Legacy Applications. Any GTK3 applications you still use will now integrate nicely with modern libadwaita applications. The one part of GNOME I really do deeply dislike is its icon theme. I can't quite explain why I dislike this icon set so much, but it runs deep, so one of the very first things I do is replace the default GNOME icon set with my personal favourite, Qogir. This is a popular icon set, so it's usually available in your distribution's repositories, but I always install it from its GitHub page. Changing GNOME's icon set is as simple as selecting it in GNOME Tweaks. You can't get much more personal taste than an icon set, and there are dozens of amazing sets to choose from in the Linux world. Changing them out and trying out new ones is stupidly easy, and it's definitely worth looking at a few that might be more pleasing to you than GNOME's (or KDE's) default. Lastly, I open Add Water and enable the amazing GNOME theme for LibreWolf. Add Water basically makes this as easy as flipping a switch, so there's no need to copy any files into your LibreWolf profile or whatever.
The application also provides a few more small tweaks to fiddle with, like enabling standard tab widths so tabs don't grow and shrink as you close and open them, moving the bookmarks bar below the tab bar, and many more. Extensions Since the release of GNOME 3 in 2011, extensions have been the most capable way to modify GNOME's look, behaviour, and feature set. As far as I can tell, while the extension framework is an official part of the GNOME Shell, the extensions themselves are all third-party and not part of a vanilla GNOME installation. By now, there are over 2800 listed extensions, but that number includes abandoned ones, so it's hard to determine how many are currently maintained. Whatever the actual number is, there's bound to be something in there you're going to want to use. Here are the extensions I have installed. Let's just start at the top and work our way down. I guess I'm forced to do another list. There are countless more extensions to choose from, and you're definitely going to find things you never even thought could be useful. Miscellaneous tweaks There are a few other things I modify. In GNOME Tweaks, I make it so that double-clicking a window's titlebar minimises it while right-clicking it lowers it; two features I picked up during my years as a BeOS user that I absolutely refuse to give up. I configure the dock from Dash to Dock so that it always remains on top and never hides itself, no matter the circumstances. In Settings, I disable virtual desktops entirely (I don't like virtual desktops), and I make sure tap-to-click is disabled (if I'm on a laptop). GNOME is good, actually After making all of these changes, I feel quite comfortable using GNOME, at least on my laptop. It's a nice, coherent experience, and offers what is probably the most polished graphical user interface you can find on Linux, even if it isn't the most full-featured. The third-party application ecosystem, through modern
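Most of the tweaks described above are plain dconf keys under the hood, so they can be scripted. Here is a small sketch that prints the corresponding `gsettings` commands rather than running them; the schema keys are standard GNOME ones, but the `Qogir` value assumes that icon theme is already installed, and you should review the output before piping it to a shell.

```python
import shlex

# Each tweak from the article as a (schema, key, value) triple.
# These are stock GNOME schema keys; 'Qogir' assumes the theme is installed.
TWEAKS = [
    ("org.gnome.desktop.interface", "icon-theme", "Qogir"),
    ("org.gnome.desktop.wm.preferences", "action-double-click-titlebar", "minimize"),
    ("org.gnome.desktop.wm.preferences", "action-right-click-titlebar", "lower"),
    ("org.gnome.desktop.peripherals.touchpad", "tap-to-click", "false"),
]

def gsettings_command(schema: str, key: str, value: str) -> list[str]:
    """Build the argv for one `gsettings set` call."""
    return ["gsettings", "set", schema, key, value]

if __name__ == "__main__":
    # Dry run: print the commands instead of executing them.
    for schema, key, value in TWEAKS:
        print(shlex.join(gsettings_command(schema, key, value)))
```

Running the script and piping its output to `sh` would apply all four tweaks at once, which is handy when re-provisioning a fresh Fedora install.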
- How fast is a macOS VM, and how small could it be?
To assess how small a macOS VM could be, I ran the same VM of macOS 26.4.1 on progressively smaller CPU core and memory allocations, using my virtualiser Viable. The VM’s display window was set to a standard 1600 x 1000, and I ran Safari through its paces and performed some lightweight everyday tasks, including Storage analysis in Settings. Starting with 4 virtual cores and 8 GB vRAM, where the VM ran perfectly briskly with around 5 GB of memory used, I stepped down to 3 cores and 6 GB, to discover that memory usage fell to 3.9 GB and everything worked well. With just 2 cores and 4 GB of memory only 3.1 GB of that was used, and the VM continued to handle those lightweight tasks normally. ↫ Howard Oakley This is good news for people interested in the MacBook Neo who may also want to run a macOS virtual machine on it.
- Email is crazy
Email is like those creaking old Terminators from the ’70s which continue to function without complaining. Designed for a world that doesn’t exist anymore, it has optional encryption, no built-in auth, three⁺ retrofitted security layers bolted on top, an unstandardized filtering layer and many more quirks. Yet billions of emails arrive correctly every single day. Email is not elegant but nonetheless it is Lindy. In the new age of agentic AI, we can only expect it to metamorphose into another dimension. ↫ Saurabh Sam! Khawase The fact that email is as complicated as it is would be bad enough on its own, but having it be so dominantly controlled by only a few large gatekeepers like Google and Microsoft surely isn't helping either. I feel like email is no longer really a technology individuals can actively partake in at every level; it feels much more like WhatsApp or iMessage or whatever, in that we just get to send messages, and that's it. Running your own mail server isn't only a complex endeavour, it's also a continuous cat-and-mouse game with companies like Google and Microsoft to ensure you don't end up on some shitlist and your emails stop arriving. I settled on Fastmail as my email service, and it works quite well. Still, I would love to be able to just run my own email server, or have some of my far more capable friends run one for a small group of us, but it's such a daunting and unpleasant effort that few people seem to have the stomach and perseverance for it.
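The "retrofitted security layers" the quote alludes to are SPF, DKIM, and DMARC, all bolted on as DNS TXT records. A sketch of what they look like, and of how a receiver might pick a DMARC record apart; the domain and record contents below are illustrative placeholders, not real deployments:

```python
# Illustrative TXT records for the three retrofitted layers (placeholder domain).
SPF_RECORD = "v=spf1 mx a:mail.example.com -all"        # hosts allowed to send
DKIM_RECORD = "v=DKIM1; k=rsa; p=MIGfMA0G..."           # public key (truncated placeholder)
DMARC_RECORD = "v=DMARC1; p=reject; rua=mailto:postmaster@example.com"

def parse_dmarc(record: str) -> dict[str, str]:
    """Split a DMARC TXT record into its tag=value pairs."""
    return dict(
        part.strip().split("=", 1)
        for part in record.split(";")
        if "=" in part
    )

policy = parse_dmarc(DMARC_RECORD)
print(policy["p"])  # the policy receivers apply to failing mail: "reject"
```

Getting all three records right (and keeping large receivers happy with them) is exactly the kind of ongoing maintenance that makes self-hosting feel like a cat-and-mouse game.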
- The day I logged 1 in every 2000 public IPv4: visualizing the AI scraper DDoS
What if you run a few online services for you and your friends, like a small git instance and a grocery list service, but you get absolutely hammered by AI! scrapers? I cannot impress upon you, reader, that this is not only an attack that is coordinated, it is an attack that is distributed. I run a small set of services, basically only for me and my friends. I am not a hyperscaler, I am not a tech company, I am not even a small platform. I have a git forge where I put the shit I make, and a couple other services where me and my friends backup our files or write our grocery lists. I am not fucking Meta and I cannot scale the fuck up just because OpenAI or Anthropic or Meta or whoever is training a model that week wants to suck all the content out of my VPS ONCE MORE until it’s dry. ↫ lux at VulpineCitrus So how much traffic did the author of this piece, lux, get from AI! scraping bots? Within a time period of 24 hours, they were hammered by 2,040,670 unique IP addresses, 98% of which were IPv4 addresses, which means that 1 out of every 2000 publicly available IPv4 addresses was involved in the scraping. Together, they performed over 5 million requests. And just to reiterate: they were scraping a few very small, friends-only services run by some random person. This is absolutely insane. If, at this point in time, with everything that we know about just how deeply unethical every single aspect of AI! is, you're still using and promoting it, what is wrong with you? If you're so addicted to your AI! girlfriend's unending stream of useless, forgettable, sycophantic slop, despite being aware of the damage you're doing to those around you, there's something seriously wrong with you, and you desperately need professional help. You don't need any of this. The world doesn't need any of this. Nobody likes the slop AI! regurgitates, and nobody likes you for enabling it. Get help.
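The 1-in-2000 figure is easy to sanity-check against the size of the IPv4 address space. Note that the full 2^32 space includes reserved and private ranges, so the publicly routable pool is somewhat smaller, which pushes the ratio even closer to the article's number:

```python
# Back-of-the-envelope check of the "1 in 2000 public IPv4 addresses" claim.
unique_ips = 2_040_670      # unique addresses seen in 24 hours (from the article)
ipv4_share = 0.98           # the article says 98% of them were IPv4
ipv4_space = 2 ** 32        # the entire IPv4 space, reserved ranges included

ipv4_seen = unique_ips * ipv4_share
ratio = ipv4_space / ipv4_seen
print(f"roughly 1 in {ratio:.0f} of all IPv4 addresses")
```

Against the full 2^32 space this works out to roughly 1 in 2100; counting out the reserved and private ranges that can never appear as public source addresses brings it to about 1 in 2000, matching the headline.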
- Earliest 86-DOS and PC-DOS code released as open source
Microsoft is continuing its efforts to release early versions of DOS as open source, and today we've got a special one. We’re stoked today to showcase some newly available source code materials that provide an even earlier look into the development of PC-DOS 1.00, the first release of DOS for the IBM PC. A dedicated team of historians and preservationists led by Yufeng Gao and Rich Cini has worked to locate, scan, and transcribe the stack of DOS-era source listings from Tim Paterson, the author of DOS. The listings include sources to the 86-DOS 1.00 kernel, several development snapshots of the PC-DOS 1.00 kernel, and some well-known utilities such as CHKDSK. Not only were these assembler listings, but there were also listings of the assembler itself! This work offers rare insight into how MS-DOS/PC-DOS came to be, and how operating system development was done at the time, not as it was later reconstructed. ↫ Stacey Haffner and Scott Hanselman It's wild that the source code had to be transcribed from paper, including notes and changes. You can find more information about the process on Gao’s website and Cini’s website.

- Linux 7.1-rc2 Released with Driver Fixes, Steam Deck OLED Audio Repair, and Growing AI Patch Trends
by George Whittaker Linus Torvalds has officially released Linux kernel 7.1-rc2, the second release candidate in the Linux 7.1 development cycle. While Torvalds described the update as a “fairly normal” RC release, the kernel includes a broad collection of driver fixes, subsystem cleanups, and stability improvements that continue shaping the next major Linux kernel release.
Although still an early testing version intended mainly for developers and enthusiasts, Linux 7.1-rc2 already delivers several notable fixes—especially for graphics hardware, networking, and gaming devices like the Steam Deck OLED. A Strange-Looking Release—But for a Good Reason One of the first things Torvalds mentioned in the release announcement was the unusually large patch statistics. At first glance, the release appears much larger than expected, but there’s an explanation behind the inflated numbers.
Much of the activity comes from a large cleanup effort in the KVM selftests subsystem, where developers renamed variables and types to better match Linux kernel coding conventions. Because thousands of lines were renamed rather than fundamentally rewritten, the patch count looks dramatic even though the underlying functional changes are relatively modest.
Torvalds specifically advised testers not to overreact to the “big and strange” diff statistics. Graphics and Driver Fixes Take Center Stage As is common during early release candidates, a large portion of the work in Linux 7.1-rc2 focuses on hardware drivers. GPU and networking drivers account for a significant share of the meaningful fixes in this release.
Notable improvements include:
- Additional fixes for AMD GPU support
- Intel Xe graphics driver adjustments and tuning
- Networking stability improvements
- Filesystem fixes, including NTFS driver updates
- Memory leak patches and race-condition corrections
These kinds of updates are critical during the RC phase because they help stabilize hardware compatibility before the final release reaches mainstream distributions. Steam Deck OLED Audio Finally Gets Fixed One of the more interesting fixes in Linux 7.1-rc2 addresses a long-standing issue affecting the Steam Deck OLED. According to reports, audio support for Valve’s handheld had been broken in the mainline Linux kernel for nearly two years, forcing Valve and some handheld-focused distributions to carry their own downstream patches and workarounds.
With Linux 7.1-rc2, an upstream fix for the audio issue has finally landed, potentially simplifying support for Linux gaming handhelds moving forward.
For Linux gamers and portable gaming enthusiasts, this is one of the more practical improvements included in the release candidate.
- LibreOffice 26.4 Beta Experiments with AI Writing Features and Smarter Editing Tools
by George Whittaker The upcoming LibreOffice 26.4 Beta is introducing early AI-powered writing capabilities, signaling a new direction for the open-source office suite. While LibreOffice has traditionally focused on privacy, local processing, and open standards, the beta release shows that The Document Foundation is now exploring how artificial intelligence can assist users without fully embracing cloud-dependent ecosystems.
The result is a cautious but notable step toward AI-enhanced productivity on Linux and other desktop platforms. AI Writing Assistance Comes to LibreOffice One of the biggest additions connected to LibreOffice 26.4 Beta is expanded support for AI-assisted writing tools through integrations such as WritingTool, an open-source LibreOffice extension designed to enhance editing workflows.
These AI features focus on practical writing assistance rather than aggressive automation. Current capabilities include:
- Grammar and style suggestions
- Paragraph rewriting and refinement
- Text expansion and summarization
- Translation assistance
- AI-assisted content generation
Unlike many proprietary AI platforms, these tools can operate using local AI models, allowing users to avoid sending documents to external cloud services. A Privacy-Focused Approach to AI LibreOffice’s AI direction differs from the strategies used by many commercial office suites. Instead of tightly integrating mandatory cloud AI services, the project appears focused on:
- Optional AI functionality
- User-controlled integrations
- Support for local inference servers
- Compatibility with self-hosted AI solutions
The WritingTool project specifically highlights support for local AI backends and OpenAI-compatible APIs, including self-hosted tools like LocalAI.
This approach aligns closely with the values of many Linux and open-source users who prioritize privacy and transparency. What AI Tools Can Actually Do The AI writing features currently being tested are aimed at improving productivity rather than replacing human writing entirely.
Examples include: Grammar and Style Improvements AI can analyze text for readability, awkward phrasing, and stylistic consistency. Paragraph Rewriting Users can ask the assistant to:
- Simplify text
- Make writing more formal or casual
- Expand short sections
- Rephrase unclear sentences
Content Assistance The tools can also help generate outlines, draft paragraphs, or suggest alternative wording for documents.
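Support for "OpenAI-compatible APIs" means the request shape is the familiar chat-completions JSON, regardless of whether the backend is a cloud service or a self-hosted LocalAI instance. A hedged sketch of such a request; the endpoint URL and model name are placeholders for whatever the local server actually exposes, not values taken from WritingTool itself:

```python
import json
from urllib import request

# Placeholder endpoint/model for a self-hosted, OpenAI-compatible server
# (e.g. LocalAI); adjust both to match your local setup.
ENDPOINT = "http://localhost:8080/v1/chat/completions"
MODEL = "local-model"

def rewrite_request(text: str, instruction: str) -> bytes:
    """Build a chat-completions payload asking for a rewrite of `text`."""
    payload = {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    }
    return json.dumps(payload).encode()

def send(body: bytes) -> str:
    """POST the payload; with a local backend, the text never leaves the machine."""
    req = request.Request(ENDPOINT, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

body = rewrite_request("Their going to the store.", "Fix grammar only.")
```

Because the wire format is the same everywhere, swapping a cloud provider for a self-hosted backend is just a change of `ENDPOINT`, which is what makes the privacy-first approach practical.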
- Linux Foundation Launches Open Driver Initiative to Strengthen Hardware Support Across Linux
by George Whittaker The Linux Foundation has announced a new Open Driver Initiative, a collaborative effort aimed at improving the development, maintenance, and long-term sustainability of open-source hardware drivers across the Linux ecosystem.
The initiative reflects growing demand for better hardware compatibility in areas ranging from desktops and gaming systems to cloud infrastructure, automotive platforms, AI hardware, and next-generation networking. As Linux expands into more industries and devices, driver quality and openness have become increasingly important. Why Open Drivers Matter Hardware drivers are the bridge between the operating system and physical components such as:
- Graphics cards
- Wi-Fi adapters
- Storage controllers
- Network devices
- Embedded and automotive systems
When drivers are open source, developers can:
- Improve compatibility more quickly
- Audit code for security issues
- Maintain support for older hardware longer
- Integrate drivers more cleanly into the Linux kernel
Open drivers also reduce dependence on proprietary vendor software, which can become outdated or unsupported over time. What the Open Driver Initiative Aims to Do According to early details surrounding the Linux Foundation’s broader infrastructure efforts, the initiative is designed to encourage:
- Shared driver development standards
- Better collaboration between hardware vendors and kernel maintainers
- Open governance models for driver ecosystems
- Improved testing, validation, and long-term maintenance
The effort appears aligned with the Linux Foundation’s long-standing role as a neutral organization coordinating open-source collaboration across industries. A Push for Industry-Wide Collaboration The initiative arrives at a time when Linux is increasingly used in:
- AI and high-performance computing
- Automotive and software-defined vehicles
- Telecommunications and Open RAN infrastructure
- Embedded devices and edge computing
Several Linux Foundation-hosted projects already emphasize open infrastructure and hardware collaboration, including Automotive Grade Linux (AGL) and networking initiatives focused on open radio access networks.
By launching a dedicated effort around drivers, the Linux Foundation is attempting to reduce fragmentation and improve interoperability across hardware ecosystems. Why This Matters for Linux Users For everyday Linux users, better open driver support can lead to:
- Canonical Unveils Ubuntu AI Strategy: Local Models, User Control, and Smarter Workflows
by George Whittaker Canonical has officially revealed its long-anticipated plans to bring artificial intelligence features into Ubuntu, marking a significant shift for one of the world’s most widely used Linux distributions. Rather than rushing into the AI wave, Canonical is taking a measured, privacy-focused approach, one that aims to enhance the operating system without compromising its open-source values.
The rollout is expected to take place gradually throughout 2026, with early features likely appearing in upcoming Ubuntu releases. A Gradual, Thoughtful AI Rollout Canonical isn’t positioning Ubuntu as an “AI-first” operating system. Instead, the company is introducing AI in stages, focusing on practical improvements rather than hype-driven features.
The plan follows a two-phase model:
- Implicit AI features: enhancements running quietly in the background
- Explicit AI features: user-facing tools and workflows powered by AI
This approach allows Ubuntu to evolve naturally, improving existing functionality before introducing more advanced capabilities. Local AI First, Not the Cloud One of the most important aspects of Canonical’s strategy is its emphasis on local AI processing, also known as on-device inference.
Instead of sending data to remote servers, Ubuntu will aim to:
- Run AI models directly on the user’s hardware
- Reduce reliance on cloud services
- Improve privacy and performance
Canonical has made it clear that local inference will be the default, with cloud-based options available only when explicitly chosen by the user.
This aligns closely with the privacy expectations of Linux users, who often prefer greater control over their data. What AI Features Could Look Like Canonical has outlined several potential use cases for AI inside Ubuntu. These include: Accessibility Improvements AI will enhance tools like:
- Speech-to-text
- Text-to-speech
- Assistive technologies
These features aim to make Ubuntu more inclusive and easier to use for a wider range of users. Smarter System Assistance Future AI features may help users:
- Troubleshoot system issues
- Interpret logs and error messages
- Automate repetitive tasks
This could significantly lower the learning curve for new Linux users. Agent-Based Automation Canonical is also exploring “agentic” AI workflows, where AI can take actions on behalf of the user.
Examples include:
- Thunderbird 150 Lands on Linux: Smarter Encryption, Better Tools, and a Polished Experience
by George Whittaker Mozilla has officially rolled out Thunderbird 150.0, the latest version of its open-source email client, bringing a mix of security-focused enhancements, usability upgrades, and workflow improvements for Linux and other platforms. Released in April 2026, this update continues Thunderbird’s steady evolution as a powerful desktop email solution.
For Linux users, Thunderbird 150 delivers meaningful updates that improve both everyday usability and advanced email handling, especially for encrypted communication. Stronger Support for Encrypted Email One of the standout improvements in Thunderbird 150 is how it handles encrypted messages.
Users can now:
- Search inside encrypted emails (OpenPGP and S/MIME)
- Generate “unobtrusive” OpenPGP signatures that appear cleaner to recipients
These changes make encrypted communication far more practical, especially for users who rely on secure email for work or privacy-sensitive tasks. New Productivity and Workflow Features Thunderbird 150 introduces several small but impactful workflow improvements:
- A new Account Hub opens automatically on first launch, simplifying setup
- Recent Destinations in settings can now be sorted alphabetically
- Address book entries can be copied as vCard files
- A new custom accent color option allows interface personalization
These updates make Thunderbird easier to configure and more flexible to use daily. Improved Built-In PDF Viewer Thunderbird’s integrated PDF viewer gets a useful upgrade: users can now reorder pages directly within the viewer.
This is particularly helpful for:
- Managing attachments without external tools
- Editing documents quickly before sending
- Streamlining email-based workflows
Combined with ongoing security fixes, the PDF viewer becomes both more capable and safer. Calendar and Interface Enhancements Several improvements focus on usability and accessibility:
- Calendar views now support touchscreen scrolling
- Fixed issues with calendar layouts and navigation
- Better screen reader support and accessibility fixes
- General UI refinements across the application
These changes contribute to a smoother, more consistent user experience across devices. Bug Fixes and Stability Improvements Thunderbird 150 also resolves a wide range of issues, including:
- Linux Kernel 6.19 Reaches End of Life: Time to Move Forward
by George Whittaker The Linux kernel continues its fast-paced release cycle, and with that comes an important milestone: Linux kernel 6.19 has officially reached end of life (EOL). For users and distributions still running this branch, it’s now time to upgrade to a newer kernel version.
This isn’t unexpected; Linux 6.19 was never intended to be a long-term release, but it does serve as a reminder of how quickly non-LTS kernel branches move through their lifecycle. Official End of Support The final update in the 6.19 series, Linux 6.19.14, has been released and marked as the last maintenance version. Kernel maintainer Greg Kroah-Hartman confirmed that no further updates will follow, stating that the branch is now officially end-of-life.
On kernel.org, the 6.19 series is now listed as EOL, meaning it will no longer receive bug fixes or security patches. Why 6.19 Had a Short Lifespan Unlike some kernel releases, Linux 6.19 was not a long-term support (LTS) version. Short-lived kernel branches are typically supported for only a few months before being replaced by newer releases.
Linux follows a rapid development model:
- New major versions are released frequently
- Short-term branches receive limited updates
- Only selected kernels are designated as LTS for extended support
Because of this, 6.19 was always meant to be a stepping stone rather than a long-term foundation. What Users Should Do Now With 6.19 no longer maintained, continuing to use it poses risks, especially in environments where security and stability matter.
Recommended upgrade paths include: Upgrade to Linux 7.0 The most direct path forward is the Linux 7.0 kernel series, which succeeds 6.19 and introduces new hardware support and ongoing fixes.
This is a good option for:
- Desktop users
- Rolling-release distributions
- Users who want the latest features
Switch to an LTS Kernel For production systems, servers, or long-term stability, moving to an LTS kernel is often the better choice.
Current LTS options include:
- Linux 6.18 LTS (supported until 2028)
- Linux 6.12 LTS (supported until 2028)
- Linux 6.6 LTS (supported until 2027)
These versions receive ongoing security updates and are better suited for stable environments. Why EOL Matters When a kernel reaches end of life:
- Archinstall 4.2 Shifts to Wayland-First Profiles, Leaving X.Org Behind
by George Whittaker The Arch Linux installer continues evolving alongside the broader Linux desktop ecosystem. With the release of Archinstall 4.2, a notable change has arrived: Wayland is now the default focus for graphical installation profiles, while traditional X.Org-based profiles have been removed or deprioritized.
This move reflects a wider transition happening across Linux, one that is gradually redefining how graphical environments are built and used. A Turning Point for Archinstall Archinstall, the official guided installer for Arch Linux, has steadily improved over time to make installation more accessible while still maintaining Arch’s minimalist philosophy.
With version 4.2, the installer now aligns more closely with modern desktop trends by emphasizing Wayland-based environments during setup, instead of offering traditional X.Org configurations as first-class options.
This doesn’t mean X.Org is completely gone from Arch Linux, but it does signal a clear shift in direction. Why Wayland Is Taking Over Wayland has been gaining traction for years as the successor to X.Org, offering a more streamlined and secure approach to rendering graphics on Linux.
Compared to X.Org, Wayland is designed to:
- Reduce complexity in the graphics stack
- Improve security by isolating applications
- Deliver smoother rendering and better performance
- Support modern display technologies like high-DPI and variable refresh rates
As the Linux ecosystem evolves, many distributions and desktop environments are prioritizing Wayland as the default display protocol. What Changed in Archinstall 4.2 With this release, users installing Arch through Archinstall will notice:
- Wayland-based desktop environments and compositors are now the primary options
- X.Org-centric setups are no longer emphasized in guided profiles
- Installation workflows better reflect modern Linux defaults
This simplifies the installation experience for new users, who no longer need to choose between legacy and modern display systems during setup. What About X.Org? While Archinstall is moving forward, X.Org itself is not disappearing overnight.
Many applications and workflows still rely on X11, and compatibility is maintained through XWayland, which allows X11 applications to run within Wayland sessions.
For advanced users, Arch still provides full flexibility:
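A quick way to check which display protocol a given session actually ended up with: most login managers export it in the environment. A minimal sketch; it relies on the `XDG_SESSION_TYPE` convention, which is common but not guaranteed on every setup:

```python
import os

def session_type(environ=os.environ) -> str:
    """Report whether the current session is 'wayland', 'x11', or 'unknown'.

    XDG_SESSION_TYPE is set by most login managers, but it is a
    convention rather than a guarantee, hence the 'unknown' fallback.
    """
    return environ.get("XDG_SESSION_TYPE", "unknown")

if __name__ == "__main__":
    kind = session_type()
    print(f"Session type: {kind}")
    if kind == "wayland" and "DISPLAY" in os.environ:
        # An X DISPLAY inside a Wayland session usually means XWayland
        # is running, so legacy X11 applications can still be launched.
        print("XWayland appears to be available for X11 apps")
```

On a fresh Archinstall 4.2 system this should typically report `wayland`, with the XWayland line confirming that X11-only applications still have somewhere to run.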
- OpenClaw in 2026: What It Is, Who’s Using It, and Whether Your Business Should Adopt It
by George Whittaker “probably the single most important release of software, probably ever.”
— Jensen Huang, CEO of NVIDIA
Wow! That’s a bold statement from one of the most influential figures in modern computing.
But is it true? Some people think so. Others think it’s hype. Most are somewhere in between, aware of OpenClaw, but not entirely sure what to make of it. Are people actually using it? Yes. Who’s using it? More than you might expect. Is it experimental, or is it already changing how work gets done? That depends on how it’s being applied. Is it more relevant for businesses or consumers right now? That’s one of the most important, and most misunderstood, questions.
This article breaks that down clearly: what OpenClaw is, how it works, who is using it today, and where it actually creates value.
What makes OpenClaw different isn’t just the technology, it’s where it fits. Most of the AI tools people are familiar with still require a human to take the next step. They assist, but they don’t execute. OpenClaw changes that dynamic by connecting decision-making directly to action. Once you understand that shift, the rest of the discussion, who’s using it, how it’s being deployed, and where it creates value, starts to make a lot more sense.
Top 10 Questions About OpenClaw What is OpenClaw?
OpenClaw is an open-source AI agent framework that enables large language models like Claude, GPT, and Gemini to execute real-world tasks across software systems, including APIs, files, and workflows.
What does OpenClaw actually do?
OpenClaw functions as an execution layer that allows AI systems to take actions, such as sending emails, updating CRM records, or running scripts, instead of only generating responses.
Do you need to be a developer to use OpenClaw?
No, but technical familiarity helps. Non-developers can use prebuilt workflows, while developers can customize and scale implementations more effectively.
Is OpenClaw more suited for business or consumer use?
OpenClaw is currently more suited for business and technical use cases where structured workflows exist. Consumer use is emerging but remains secondary.
How is OpenClaw different from ChatGPT or Claude?
ChatGPT and Claude generate outputs, while OpenClaw enables those outputs to trigger actions across connected systems.
Who created OpenClaw?
- Linux Kernel Developers Adopt New Fuzzing Tools
by George Whittaker The Linux kernel development community is stepping up its security game once again. Developers, led by key maintainers like Greg Kroah-Hartman, are actively adopting new fuzzing tools to uncover bugs earlier and improve overall kernel reliability.
This move reflects a broader shift toward automated testing and AI-assisted development, as the kernel continues to grow in complexity and scale.

What Is Fuzzing and Why It Matters

Fuzzing is a software testing technique that feeds random or unexpected inputs into a program to trigger crashes or uncover vulnerabilities.
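The core loop is simple enough to show in miniature. This toy sketch (it assumes nothing about Syzkaller's or the kernel's internals) feeds random byte strings to a deliberately buggy parser and collects the inputs that make it "crash":

```python
import random

def fragile_parser(data: bytes) -> int:
    """A deliberately buggy target: 'crashes' when the first byte is 0x7F."""
    if data and data[0] == 0x7F:
        raise RuntimeError("parser crash")  # the planted bug
    return len(data)

def fuzz(target, iterations: int = 10_000, seed: int = 0) -> list:
    """Feed random byte strings to `target`; collect inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            target(data)
        except Exception:
            crashes.append(data)  # save the crashing input for later triage
    return crashes

found = fuzz(fragile_parser)
print(f"{len(found)} crashing inputs found")
```

Real kernel fuzzers replace the random byte strings with structured system-call sequences and use coverage feedback to steer generation, but the shape of the loop, generate, execute, record crashes, is the same.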
In the Linux kernel, fuzzing has become one of the most effective ways to detect:
- Memory corruption bugs
- Race conditions
- Privilege escalation flaws
- Edge-case failures in subsystems

Modern fuzzers like Syzkaller have already discovered thousands of kernel bugs over the years, making them a cornerstone of Linux security testing.

New Tools Enter the Scene

Recently, kernel maintainers have begun experimenting with new fuzzing frameworks and tooling, including a project internally referred to as “clanker”, which has already been used to identify multiple issues across different kernel subsystems.
Early testing has uncovered bugs in areas such as:
- SMB/KSMBD networking code
- USB and HID subsystems
- Filesystems like F2FS
- Wireless and device drivers

The speed at which these issues were discovered suggests that these new tools are significantly improving bug-detection efficiency.

AI and Smarter Fuzzing Techniques

One of the most interesting developments is the growing role of AI and machine learning in fuzzing.
New research projects like KernelGPT use large language models to:
- Automatically generate system call sequences
- Improve test coverage
- Discover previously hidden execution paths

These techniques can enhance traditional fuzzers by making them smarter about how they explore the kernel’s behavior.
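"Smarter exploration" usually means coverage feedback: keep the inputs that reach new behavior, mutate them, repeat. The toy sketch below (it simulates a target rather than running real syscalls, and is not KernelGPT's actual method) shows that loop over made-up syscall sequences:

```python
import random

SYSCALLS = ["open", "read", "write", "mmap", "ioctl", "close"]

def simulate(seq):
    """Pretend target: 'coverage' is the set of adjacent call pairs executed."""
    return {(a, b) for a, b in zip(seq, seq[1:])}

def mutate(seq, rng):
    """Replace one random call in the sequence."""
    seq = list(seq)
    seq[rng.randrange(len(seq))] = rng.choice(SYSCALLS)
    return seq

rng = random.Random(1)
corpus = [[rng.choice(SYSCALLS) for _ in range(4)]]  # initial seed sequence
coverage = set()
for _ in range(200):
    candidate = mutate(rng.choice(corpus), rng)
    new_pairs = simulate(candidate) - coverage
    if new_pairs:               # keep only inputs that reach new coverage
        coverage |= new_pairs
        corpus.append(candidate)

print(f"corpus size: {len(corpus)}, pairs covered: {len(coverage)}")
```

An LLM-assisted fuzzer essentially replaces the blind `mutate` step with model-generated candidates, which is where projects like KernelGPT aim to add value.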
Other advancements include:
- Better crash analysis and deduplication tools (like ECHO)
- Configuration-aware fuzzing to explore deeper kernel states
- Feedback-driven fuzzing loops for improved coverage

Together, these innovations help developers focus on the most meaningful bugs rather than sifting through duplicate reports.

Why This Shift Is Happening Now

The Linux kernel is one of the most complex software projects in existence. With millions of lines of code and contributions from thousands of developers, manually catching every bug is nearly impossible.
- GNOME 50 Reaches Arch Linux: A Leaner, Wayland-Only Future Arrives
by George Whittaker Arch Linux users are among the first to experience the latest GNOME desktop, as GNOME 50 has begun rolling out through Arch’s repositories. Thanks to Arch’s rolling-release model, new upstream software like GNOME arrives quickly, giving users early access to the newest features and architectural changes.
With GNOME 50, that includes one of the most significant shifts in the desktop’s history.

A Major GNOME Milestone

GNOME 50, officially released in March 2026 under the codename “Tokyo,” represents six months of development and refinement from the GNOME community.
Unlike some previous versions, this release focuses less on dramatic redesigns and more on strengthening the foundation of the desktop, improving performance, modernizing graphics handling, and simplifying long-standing complexities.
For Arch Linux users, that translates into a more streamlined and future-ready desktop environment.

Goodbye X11, Hello Wayland-Only Desktop

The headline change in GNOME 50 is the complete removal of X11 support from GNOME Shell and its window manager, Mutter.
After years of gradual transition:
- X11 sessions were first deprecated
- Then disabled by default
- And now fully removed in GNOME 50

This means GNOME now runs exclusively on Wayland, with legacy X11 applications handled through the XWayland compatibility layer.
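If you want to check which protocol your own session is running, most login managers export this through the environment. A small sketch, assuming the standard `XDG_SESSION_TYPE` and `WAYLAND_DISPLAY` variables are set as usual:

```python
import os

def session_type() -> str:
    """Report 'wayland', 'x11', or 'unknown' for the current session."""
    session = os.environ.get("XDG_SESSION_TYPE", "").lower()
    if session in ("wayland", "x11"):
        return session
    # Fallback hint: Wayland compositors export WAYLAND_DISPLAY for clients.
    if os.environ.get("WAYLAND_DISPLAY"):
        return "wayland"
    return "unknown"

print(session_type())
```

On a GNOME 50 desktop this should report "wayland"; note that X11 applications running under XWayland still see an X server, so checks made from inside such an app can differ from the session type.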
The result is a simpler, more modern graphics stack that reduces maintenance overhead and improves long-term performance and security.

Improved Graphics and Display Handling

GNOME 50 brings several key improvements to display and graphics performance:
- Variable Refresh Rate (VRR) enabled by default
- Better fractional scaling support
- Improved compatibility with NVIDIA drivers
- Enhanced HDR and color management

These changes aim to deliver smoother animations, more responsive desktops, and better support for modern displays.
For gamers and users with high-refresh monitors, these upgrades are especially noticeable.

Performance and Responsiveness Gains

Beyond graphics, GNOME 50 includes multiple performance optimizations:
- Faster file handling in the Files (Nautilus) app
- Improved thumbnail generation
- Reduced stuttering in animations
- Better resource usage across the desktop

These refinements make the desktop feel more responsive, particularly on systems with demanding workloads or multiple monitors.

New Parental Controls and Accessibility Features

GNOME 50 also expands its focus on usability and accessibility.