
- Debian 11 Rails Critical RCE Vulnerability DLA-4578-1 CVE-2022-32224
An RCE (Remote Code Execution) vulnerability was discovered in Ruby on Rails, an MVC Ruby-based framework for web development. This vulnerability exists when using YAML-serialized columns in Active Record, which could allow an attacker who is able to manipulate data in the database (via means like SQL injection) the ability to
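The underlying mechanism is the unsafe-deserialization class of flaw: loading attacker-controlled serialized data can execute code. As a rough, hypothetical sketch of the same class in Python (using pickle rather than Rails' YAML-serialized columns, and with a deliberately harmless payload):

```python
import pickle

# Hypothetical illustration: an object's __reduce__ controls what runs
# when its pickled form is loaded -- the same class of flaw as
# deserializing attacker-controlled YAML from an Active Record column.
class Payload:
    def __reduce__(self):
        # A real exploit would invoke os.system or similar here;
        # a harmless eval is enough to show that code runs on load.
        return (eval, ("6 * 7",))

attacker_bytes = pickle.dumps(Payload())
result = pickle.loads(attacker_bytes)  # executes eval("6 * 7")
print(result)  # 42
```

In the Rails case the serialized bytes would come from a database column an attacker had tampered with; the defense is the same in both ecosystems: never deserialize untrusted data with a format that can instantiate arbitrary objects.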
- Debian 11 p7zip-rar DLA-4577-1 Memory Corruption DoS Risk CVE-2025-53816
Jaroslav Lobačevski from GitHub Security Lab discovered a memory-corruption vulnerability in the RAR module of p7zip, a now-unmaintained fork of 7-Zip, a file archiver handling multiple formats. It is unlikely that it could lead to arbitrary code execution, but it may lead to denial of service.
- Debian LTS DLA-4576-1 p7zip Critical RCE DoS Issues Fixed
Multiple vulnerabilities were discovered in p7zip, a now unmaintained fork of 7-Zip, a file archiver handling multiple formats. To address these security vulnerabilities, whose fixes are unfortunately not isolated, this update replaces p7zip with 7-Zip v25 (which now supports GNU/Linux natively), slightly modified to make it
- Debian 11 python-authlib Critical Auth Bypass & Info Leak DLA-4579-1
Three security vulnerabilities were discovered in python-authlib, a Python library for building OAuth and OpenID Connect servers, which can cause authentication bypass or information leaks. CVE-2026-27962: Fix authentication and authorization bypass vulnerability by embedding a

- Stenberg: Mythos finds a curl vulnerability
Daniel Stenberg has published a lengthy article on his thoughts on Anthropic's Mythos, which the company decided was too dangerous for wide public release.
My personal conclusion can however not end up with anything else than that the big hype around this model so far was primarily marketing. I see no evidence that this setup finds issues to any particular higher or more advanced degree than the other tools have done before Mythos. Maybe this model is a little bit better, but even if it is, it is not better to a degree that seems to make a significant dent in code analyzing.
This is just one source code repository and maybe it is much better on other things. I can only tell and comment on what it found here.
But allow me to highlight and reiterate what I have said before: AI-powered code analyzers are significantly better at finding security flaws and mistakes in source code than any traditional code analyzers did in the past. All modern AI models are good at this now. Anyone with time and some experimental spirits can find security problems now. The high-quality chaos is real.
- Two stable kernels with Dirty Frag fixes
Greg Kroah-Hartman has released the 7.0.6 and 6.18.29 stable kernels with Hyunwoo Kim's patch for the second vulnerability (CVE-2026-43500) reported with Dirty Frag and Copy Fail 2. All users are advised to upgrade.
- [$] Providing 64KB base pages with 4KB kernels, two different ways
Some CPU architectures are able to run with a number of different base-page sizes; using a larger size can often result in better performance at the cost of increased memory use. Other architectures are more limited. At the 2026 Linux Storage, Filesystem, Memory Management, and BPF Summit, two sessions in the memory-management track explored options for letting processes run with 64KB page sizes when the underlying kernel does not. The first was focused on letting each process have its own page size, while the second concerned bringing 64KB pages to x86 systems.
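For a concrete feel of the numbers being discussed, a process can ask the kernel for its base page size; a minimal Python sketch, assuming a POSIX system where `os.sysconf` exposes `SC_PAGE_SIZE`:

```python
import os

# The base page size the kernel presents to this process
# (typically 4096 bytes on x86; some ARM systems use 16KB or 64KB).
page = os.sysconf("SC_PAGE_SIZE")
print(f"base page size: {page} bytes")

# A 64KB unit is simply a whole number of base pages that the kernel
# (or the proposals described above) would manage as one block.
assert 65536 % page == 0
print(f"base pages per 64KB unit: {65536 // page}")
```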
- Debian to require reproducible builds
Paul Gevers has slipped an interesting bit of news into a "bits from the release team" message: Aided by the efforts of the Reproducible Builds project, we've decided it's time to say that Debian must ship reproducible packages. Since yesterday, we have enabled our migration software to block migration of new packages that can't be reproduced or existing packages (in testing) that regress in reproducibility. As Gioele Barabucci pointed out, "reproducible" in this sense is limited to building within an instance of Debian's build environment, which is a tighter requirement than is normally used. It is still a big step forward for reproducible builds.
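The property being enforced can be stated very simply: rebuild the same source in the same build environment and the resulting artifact must be bit-identical. A toy Python sketch of that check, with the two "builds" stubbed out as byte strings (in Debian's case they are whole packages rebuilt in the project's build environment):

```python
import hashlib

def digest(artifact: bytes) -> str:
    """Content hash of a build artifact."""
    return hashlib.sha256(artifact).hexdigest()

# Stand-ins for two independent rebuilds of the same source.
build_a = b"\x7fELF...deterministic output..."
build_b = b"\x7fELF...deterministic output..."

# Reproducible means the hashes match exactly.
reproducible = digest(build_a) == digest(build_b)
print(reproducible)  # True
```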
- Security updates for Monday
Security updates have been issued by AlmaLinux (corosync, freeipmi, kernel, and kernel-rt), Debian (corosync, firefox-esr, kernel, lcms2, libpng1.6, linux-6.1, php8.2, php8.4, postorius, pyjwt, and tor), Fedora (dotnet10.0, exim, gnutls, kernel, nextcloud, nodejs22, php, proftpd, prosody, python-pulp-glue, python-requests, rclone, and SDL3_image), Mageia (firefox, nss, rootcerts, openvpn, thunderbird, and vim), Oracle (corosync, freeipmi, gstreamer1-plugins-bad-free, gstreamer1-plugins-base, and gstreamer1-plugins-good, kernel, libpng, and mingw-libtiff), Slackware (kernel and mozilla), SUSE (build, product-composer, c-ares, cairo, copacetic, distribution, firefox, firefox-esr, frr, glibc, go1.25, google-cloud-sap-agent, iproute2, java-11-openj9, java-17-openj9, java-17-openjdk, java-1_8_0-openj9, java-21-openj9, java-21-openjdk, java-25-openjdk, kernel, libexif-devel, libpcp-devel, libtpms, libtree-sitter0_26, Mesa, micropython, mozjs128, nginx, opencc, openCryptoki, php-composer2, podman, postfix, python-pytest, python311-Django, python311-Django4, redis, semaphore, strongswan, terraform-provider-aws, terraform-provider-azurerm, terraform-provider-external, terraform-provider-google, terraform-provider-helm, terraform-provider-kubernetes, terraform-provid, tor, valkey, vim, and wireshark), and Ubuntu (linux-nvidia-tegra, linux-raspi, linux-raspi-5.4, and nasm).
- Kernel prepatch 7.1-rc3
Linus has released 7.1-rc3 for testing. "I think this answers the 'is 7.1 continuing the larger size pattern that we saw with 7.0?' question, and the answer is yes: that wasn't a fluke brought on by a .0 release - it simply seems to be the new normal."
- More stable kernels with partial Dirty Frag fixes
Greg Kroah-Hartman has released the 6.1.171, 5.15.205, and 5.10.255 stable kernels, quickly followed by the 6.1.172 and 5.15.206 kernels. This is another round of stable kernels to provide fixes for one of the CVEs (CVE-2026-43284) assigned following the Dirty Frag and Copy Fail 2 security disclosures. There is not, yet, a stable kernel with a fix for CVE-2026-43500, though a patch to fix the second half is in the works.
- [$] Forgejo "carrot disclosure" raises security questions
An unusual, some might say hostile, approach to disclosing an alleged remote-code-execution (RCE) flaw in the Forgejo software-collaboration platform has sparked a multifaceted conversation. A so-called "carrot disclosure" in April has raised questions about the researcher's methods of unveiling a security problem, Forgejo's security policies, and the project's overall security posture.
- killswitch for short-term emergency vulnerability mitigation
It seems that we are in for an extended period of the disclosure of vulnerabilities before fixes become available. One possible way of coping with this flood might be the killswitch proposal from Sasha Levin. In short, killswitch can immediately disable access to specific functionality in a running kernel, essentially blasting a vulnerable path (and its associated functionality) out of existence until a fix can be installed. "For most users, the cost of 'this socket family stops working for the day' is much smaller than the cost of running a known vulnerable kernel until the fix lands."
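As an analogy only (not Levin's actual kernel interface), the idea can be sketched in a few lines of Python: a runtime registry of disabled functionality that callers consult before exercising a possibly vulnerable path, so they fail fast instead of reaching the flaw:

```python
# Toy analogue of the killswitch concept: names in DISABLED are
# blocked at runtime, without rebuilding or rebooting anything.
DISABLED: set[str] = set()

def killswitch(name: str) -> None:
    """Disable a piece of functionality until a fix can be installed."""
    DISABLED.add(name)

def open_socket(family: str) -> str:
    if family in DISABLED:
        raise PermissionError(f"{family} disabled by killswitch")
    return f"socket({family})"

print(open_socket("AF_INET"))   # still works
killswitch("AF_PACKET")          # "blast the vulnerable path out of existence"
try:
    open_socket("AF_PACKET")
except PermissionError as err:
    print(err)                   # AF_PACKET disabled by killswitch
```

The kernel version would of course operate on real subsystems rather than a Python set, but the trade-off is the same one the quote describes: lose a feature for a day rather than run a known-vulnerable path.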
- [$] A 2026 DAMON update
The kernel's DAMON subsystem provides user-space monitoring and management of system memory. DAMON is developing rapidly, so an update on its progress has become a regular feature of the annual Linux Storage, Filesystem, Memory Management, and BPF Summit. This tradition continued at the 2026 gathering with an update from DAMON creator SeongJae Park covering a long list of new capabilities — tiering, data-attributes monitoring, transparent huge pages, and more — being added to this subsystem.
- Security updates for Friday
Security updates have been issued by AlmaLinux (libsoup and mingw-libtiff), Debian (apache2, chromium, lcms2, libreoffice, and prosody), Fedora (openssl and perl-Starman), Oracle (git-lfs, libsoup, and perl-XML-Parser), Slackware (libgpg, mozilla, and php), SUSE (389-ds, cairo, cf-cli, chromedriver, cri-tools, freeipmi, gnutls, grafana, java-11-openjdk, java-17-openjdk, jetty-minimal, libmariadbd-devel, librsvg, mesa, mozjs52, mutt, nix, opencryptoki, python-Django, python-django, python-pytest, rmt-server, thunderbird, traefik, webkit2gtk3, wireshark, and xen), and Ubuntu (civicrm, dpkg, htmlunit, lcms2, libpng1.6, linux, linux-*, linux-azure, linux-azure-fips, linux-raspi, linux-xilinx, lua5.1, nasm, opam, openexr, openjpeg2, owslib, postfix, postfixadmin, and vim).
- Four stable kernels with partial fixes for Dirty Frag
Greg Kroah-Hartman has announced the release of the 7.0.5, 6.18.28, 6.12.87, and 6.6.138 stable kernels. These kernelscontain a partial fix for the DirtyFrag and Copy Fail 2security flaws. Kroah-Hartman has confirmedthat a second patch is required, but it is still in development and has not yet been merged.
- Dirty Frag: a zero-day universal Linux LPE
Hyunwoo Kim has announced the Dirty Frag security flaw, a local-privilege-escalation (LPE) vulnerability similar to the recently disclosed Copy Fail flaw:
Because the embargo has now been broken, no patches or CVEs exist for these vulnerabilities. After consultation with the linux-distros@vs.openwall.org maintainers, and at the maintainers' request, I am publicly releasing this Dirty Frag document.
As with the previous Copy Fail vulnerability, Dirty Frag likewise allows immediate root privilege escalation on all major distributions.
Kim, who discovered the flaw and had attempted a coordinated disclosure set for May 12, has released the code for an exploit, as well as an example script to remove the vulnerable modules. A full write-up, with the disclosure timeline, is also available. It's unknown at this time whether this is an example of parallel discovery or how the third party was able to disclose it prior to the end of the embargo. We will be following up as more information comes to light.
- [$] A new era for memory-management maintainership
On April 21, Andrew Morton let it be known that he intends to begin stepping away from the maintainership of the kernel's memory-management subsystem — a responsibility he has carried since before memory management was even seen as its own subsystem. At the 2026 Linux Storage, Filesystem, Memory Management, and BPF Summit, one of the first sessions in the memory-management track was devoted to how the maintainership would be managed going forward. There are a lot of questions still to be answered.
- An update on KDE's Union style engine
Arjen Hiemstra has published an article on the status of the Union project: a single system to support all of KDE's technologies used for styling applications.
The work on Union's Breeze implementation has progressed to the point where it is very hard to distinguish whether or not you are running the Union version. We have also tested with a bunch of applications and made sure that any differences were fixed. So we are at a stage where we need to get Union into the hands of more people, both to get extra people testing whether there are any major issues, but also to have interested people creating new styles.
This means that with the upcoming Plasma 6.7 release, we plan to include Union. Discussion is currently ongoing whether we will enable it by default, but even if not there will be a way to try it out.
See Hiemstra's introductory article on Union, published in February 2025, for more about the project and its creation. Plasma 6.7 is expected to be released in mid-June.

- 9to5Linux Weekly Roundup: May 10th, 2026
The 291st installment of the 9to5Linux Weekly Roundup is here for the week ending May 10th, 2026, keeping you updated on the most important developments in the Linux world.
- SpacemiT K3 integrates 8-core RISC-V CPU cluster and 60 TOPS AI engine
SpacemiT’s Key Stone K3 is a high-performance RISC-V SoC designed for AI and edge computing applications. The processor combines eight X100 64-bit RISC-V CPU cores with eight A100 AI-oriented compute cores, along with multimedia, networking, and high-speed I/O support targeting edge and embedded AI workloads. The CPU subsystem integrates eight X100 RISC-V cores operating at […]
- Nocturne Is The Latest Music Player For GNOME To Hit v1.0
While Decibels has been the GNOME desktop's default audio player since GNOME 48, there is no shortage of other GNOME/GTK-aligned music players. Last month brought the big Amberol music player update, and there are Lollypop and others. The latest GNOME-aligned music player to hit the 1.0 milestone is Nocturne...
- FEX 2605 Brings Performance Improvements, Initial Snapdragon X2 Elite Fixes
FEX 2605 is out this weekend as the newest monthly feature release to this emulator for running Linux x86_64 binaries on ARM64 (AArch64) devices. This is the open-source project sponsored by Valve and planned for use with the upcoming Steam Frame as well as being relevant to Linux gaming on other 64-bit ARM laptops and other devices...

- Digg Tries Again, This Time As an AI News Aggregator
Digg is relaunching again, this time as an AI-focused news aggregator rather than the Reddit-style community site it recently abandoned. TechCrunch reports: On Friday evening, the founder previewed a link to the newly redesigned Digg, which now looks nothing like a Reddit clone and more like the news aggregator it once was. This time around, the site is focused on ranking news -- specifically, AI news to start. In an email to beta testers, the company said the site's goal is to "track the most influential voices in a space" and to surface the news that's actually worth "paying attention to." AI is the area it's testing this idea with, but if successful, Digg will expand to include other topics. The email warned that the site was still raw and "buggy," and was designed more to give users a first look than to serve as its public debut. On the current homepage, Digg showcases four main stories at the top: the most viewed story, a story seeing rising discussion, the fastest-climbing story, and one "In case you missed it" headline. Below that is a ranked list of top stories for the day, complete with engagement metrics like views, comments, likes, and saves. But the twist is that these metrics aren't the ones generated on Digg itself. Instead, Digg is ingesting content from X in real-time to determine what's being discussed, while also performing sentiment analysis, clustering, and signal detection to determine what matters most. [...] The site also ranks the top 1,000 people involved in AI, as well as the top companies and the top politicians focused on AI issues.
Read more of this story at Slashdot.
- CUDA Proves Nvidia Is a Software Company
Nvidia's real AI moat isn't "a piece of hardware," writes Wired's Sheon Han. It's CUDA: a mature, deeply optimized software ecosystem that keeps machine-learning workloads tied to Nvidia GPUs. An anonymous reader quotes a report from Wired: What sounds like a chemical compound banned by the FDA may be the one true moat in AI. CUDA technically stands for Compute Unified Device Architecture, but much like laser or scuba, no one bothers to expand the acronym; we just say "KOO-duh." So what is this all-important treasure good for? If forced to give a one-word answer: parallelization. Here's a simple example. Let's say we task a machine with filling out a 9x9 multiplication table. Using a computer with a single core, all 81 operations are executed dutifully one by one. But a GPU with nine cores can assign tasks so that each core takes a different column -- one from 1x1 to 1x9, another from 2x1 to 2x9, and so on -- for a ninefold speed gain. Modern GPUs can be even cleverer. For example, if programmed to recognize commutativity -- 7x9 = 9x7 -- they can avoid duplicate work, reducing 81 operations to 45, nearly halving the workload. When a single training run costs a hundred million dollars, every optimization counts. Nvidia's GPUs were originally built to render graphics for video games. In the early 2000s, a Stanford PhD student named Ian Buck, who first got into GPUs as a gamer, realized their architecture could be repurposed for general high-performance computing. He created a programming language called Brook, was hired by Nvidia, and, with John Nickolls, led the development of CUDA. If AI ushers in the age of a permanent white-collar underclass and autonomous weapons, just know that it would all be because someone somewhere playing Doom thought a demon's scrotum should jiggle at 60 frames per second. CUDA is not a programming language in itself but a "platform." 
I use that weasel word because, not unlike how The New York Times is a newspaper that's also a gaming company, CUDA has, over the years, become a nested bundle of software libraries for AI. Each function shaves nanoseconds off single mathematical operations -- added up, they make GPUs, in industry parlance, go brrr. A modern graphics card is not just a circuit board crammed with chips and memory and fans. It's an elaborate confection of cache hierarchies and specialized units called "tensor cores" and "streaming multiprocessors." In that sense, what chip companies sell is like a professional kitchen, and more cores are akin to more grilling stations. But even a kitchen with 30 grilling stations won't run any faster without a capable head chef deftly assigning tasks -- as CUDA does for GPU cores. To extend the metaphor, hand-tuned CUDA libraries optimized for one matrix operation are the equivalent of kitchen tools designed for a single job and nothing more -- a cherry pitter, a shrimp deveiner -- which are indulgences for home cooks but not if you have 10,000 shrimp guts to yank out. Which brings us back to DeepSeek. Its engineers went below this already deep layer of abstraction to work directly in PTX, a kind of assembly language for Nvidia GPUs. Let's say the task is peeling garlic. An unoptimized GPU would go: "Peel the skin with your fingernails." CUDA can instruct: "Smash the clove with the flat of a knife." PTX lets you dictate every sub-instruction: "Lift the blade 2.35 inches above the cutting board, make it parallel to the clove's equator, and strike downward with your palm at a force of 36.2 newtons." "You can begin to see why CUDA is so valuable to Nvidia -- and so hard for anyone else to touch," writes Han. "Tuning GPU performance is a gnarly problem. You can't just conscript some tender-footed undergrad on Market Street, hand them a Claude Max plan, and expect them to hack GPU kernels. 
Writing at this level is a grindsome enterprise -- unless you're a cracker-jack programmer at DeepSeek..." Han goes on to argue that rivals like AMD and Intel offer competitive specs on paper, but their software stacks have struggled with bugs, compatibility issues, and weak adoption. As a result, Nvidia has built an Apple-like moat around AI computing, leaving the industry dependent on its expensive hardware.
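The article's 9x9 multiplication-table example is easy to verify: of the 81 (i, j) products, only the pairs with i ≤ j need to be computed once commutativity is exploited. A quick Python check:

```python
from itertools import product

# All 81 cells of the 9x9 multiplication table, computed naively.
naive = [(i, j) for i, j in product(range(1, 10), repeat=2)]
assert len(naive) == 81

# Exploiting commutativity (i*j == j*i): compute each unordered
# pair only once, keeping the representative with i <= j.
deduped = [(i, j) for i, j in naive if i <= j]
print(len(deduped))  # 45 -- "reducing 81 operations to 45"
```

This is the same counting as n(n+1)/2 for n = 9; the GPU-scheduling point is that a smart assignment of work (whether by CUDA libraries or hand-written PTX) is what turns raw cores into that kind of saving.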
- Anthropic's Bug-Hunting Mythos Was Greatest Marketing Stunt Ever, Says cURL Creator
cURL creator Daniel Stenberg says Anthropic's hyped Mythos bug-hunting model found only one confirmed low-severity vulnerability in cURL, plus a few non-security bugs, after he expected a much longer list. He argues Mythos may be useful, but not meaningfully beyond other modern AI code-analysis tools. "My personal conclusion can however not end up with anything else than that the big hype around this model so far was primarily marketing," Stenberg said in a blog post. "I see no evidence that this setup finds issues to any particular higher or more advanced degree than the other tools have done before Mythos." He went on to call Mythos "an amazingly successful marketing stunt for sure." The Register reports: Stenberg explained in a Monday blog post that he was promised access to Anthropic's Mythos model - sort of - through the AI biz's Project Glasswing program. Part of Glasswing involves giving high-profile open source projects access via the Linux Foundation, but while Stenberg signed up to try Mythos, he said he never actually received direct access to the model. Instead, someone else with access ran Mythos against curl's codebase and later sent him a report. "It's not that I would have a lot of time to explore lots of different prompts and doing deep dive adventures anyway," Stenberg explained. "Getting the tool to generate a first proper scan and analysis would be great, whoever did it." That scan, which analyzed curl's git repository at a recent master-branch commit, was sent back to him earlier this month, and it found just five things that it claimed were "confirmed security vulnerabilities" in cURL. Saying he had expected an extensive list of vulnerabilities, Stenberg wrote that the report "felt like nothing," and that feeling was further validated by a review of Mythos' findings.
"Once my curl security team fellows and I had poked on this short list for a number of hours and dug into the details, we had trimmed the list down and were left with one confirmed vulnerability," Stenberg said, bringing us back to the aforementioned number. As for the other four, three turned out to be false positives that pointed out cURL shortcomings already noted in API documentation, while the team deemed the fourth to be just a simple bug. "The single confirmed vulnerability is going to end up a severity low CVE planned to get published in sync with our pending next curl release 8.21.0 in late June," the cURL meister noted. "The flaw is not going to make anyone gasp for breath."
- GM Cutting Hundreds of Salaried IT Workers As It Trims Costs, Evaluates Needs
GM is laying off about 500 to 600 salaried IT workers, mainly in Austin, Texas, and Warren, Michigan, as it restructures its technology organization and trims costs. "GM is transforming its Information Technology organization to better position the company for the future. As part of that work, we have made the difficult decision to eliminate certain roles globally. We are grateful for the contributions of the employees affected and are committed to supporting them through this transition," the automaker said in an emailed statement. CNBC reports: GM reported employing about 68,000 salaried workers globally as of the end of last year, including 47,000 white-collar employees in the U.S. Despite Monday's cuts, GM is still hiring IT workers. The company has 82 open IT positions that include positions working in artificial intelligence, motorsports and autonomous vehicles, according to the automaker's careers website.
- iPhone-Android RCS Conversations Are End-To-End Encrypted In iOS 26.5
Apple says end-to-end encryption for RCS messages between iPhone and Android is now available in iOS 26.5, though the feature is still considered beta and depends on carrier support on both sides. MacRumors reports: Apple says that it worked with Google to lead a cross-industry effort to add E2EE to RCS. iOS users will need iOS 26.5, while Android users will need the latest version of Google Messages. End-to-end encryption is on by default, and there is a toggle for it in the Messages section of the Settings app. Encrypted messages are denoted with a small lock symbol. On iPhones not running iOS 26.5, RCS messages between iPhone and Android users do not have E2EE, but the new update will put Android to iPhone conversations on par with iPhone to iPhone conversations that are encrypted through iMessage. Along with Google, Apple worked with the GSM Association to implement E2EE for RCS messages. E2EE is part of the RCS Universal Profile 3.0, published with Apple's help and built on the Messaging Layer Security protocol. RCS Universal Profile 3.0 also includes editing and deleting messages, cross-platform Tapback support, and replying to specific messages inline during cross-platform conversations.
- Students Boo Commencement Speaker After She Calls AI the 'Next Industrial Revolution'
An anonymous reader quotes a report from 404 Media: Speaking to graduates of University of Central Florida's College of Arts and Humanities and Nicholson School of Communication and Media on May 8, commencement speaker Gloria Caulfield, vice president of strategic alliances at Tavistock Group, told graduating humanities students that AI is the "next industrial revolution," and was met with thousands of booing graduates. "And let's face it, change can be daunting. The rise of artificial intelligence is the next industrial revolution," Caulfield said. At that point, murmurs rippled through the crowd. Caulfield paused, and the crowd erupted into boos. "Oh, what happened?" Caulfield said, turning around with her hands out. "Okay, I struck a chord. May I finish?" Someone in the crowd yelled, "AI SUCKS!" Her speech begins around the hour and 15 minute mark in the UCF livestream. [...] Before the industrial revolution comment, Caulfield praised Jeff Bezos for his passion and use of Amazon as a "stepping stone" to his real dream: spaceflight. Rattled after the crowd's reaction, she continued her speech: "Only a few years ago, AI was not a factor in our lives." The crowd cheered. "Okay. We've got a bipolar topic here I see," Caulfield said. "And now AI capabilities are in the palm of our hands." The crowd booed again. "I love it, passion, let's go," she said. "AI is beginning to challenge all major sectors to find their highest and best use," she continued. "Okay, I don't want any giggles when I say this. We have been through this before, these industrial revolutions. In my graduation era, we were faced with the launch of the internet." She goes on to talk about how cellphones used to be the size of briefcases. "At that time we had no idea how any of these technologies would impact the world and our lives. [...] These were some of the same trepidations and concerns we are now facing. 
But ultimately it was a game changer for global economic development and the proliferation of new businesses that never existed like Apple and Google and Meta and so many others, and not to mention countless job opportunities. So being an optimist here, AI alongside human intelligence has the potential to help us solve some of humanity's greatest problems. Many of you in this graduating class will play a role in making this happen."
- Google Says Hackers Used AI To Create Zero Day Security Flaw For the First Time
Google says it has seen the first evidence of cybercriminals using AI to create a zero-day vulnerability. "Google reported its findings to the unnamed firm affected by the vulnerability before releasing its report," reports Politico. "The company then issued a patch to fix the issue." From the report: Google Threat Intelligence Group researchers detailed the development in a report released Monday. Zero-day exploits are considered the most serious type of security flaw because they are not detected by security companies and have no known fixes. The report noted that this was the first time Google had seen evidence of AI being used to develop these vulnerabilities -- marking a major change in the cybersecurity landscape, as it suggests newer AI models could be used to create major exploits, not just find them. Google concluded that Anthropic's Claude Mythos model -- which has already found thousands of vulnerabilities across every major operating system and web browser -- was most likely not used to create the zero-day exploit. [...] The Google Threat Intelligence Group report also details efforts by Russia-linked hacking groups to use AI models to target Ukrainian networks with malware, while North Korean government hacking group APT45 used AI technologies to refine and scale up its cyber methods. John Hultquist, chief analyst at Google Threat Intelligence Group, said the findings made clear that the race to use AI to find network vulnerabilities has "already begun." "For every zero-day we can trace back to AI, there are probably many more out there," Hultquist said. "Threat actors are using AI to boost the speed, scale, and sophistication of their attacks."
- Apple Now Requires Verification For Education Store
Apple now requires Education Store shoppers in the U.S. and several other countries to verify their student, educator, parent, or homeschool-teacher status through UNiDAYS, ending the previous honor-system approach. 9to5Mac reports: Starting today, Apple requires shoppers in the United States to complete verification when making a purchase via the Education Store. This change also applies to Australia, Hong Kong, Turkey, Canada, and Chile. In many other markets around the world, such as the UK, Apple already required verification. As a refresher, people eligible for Apple's Education Store include current and newly accepted college students and their parents, as well as faculty, staff, and homeschool teachers across all grade levels. Apple is teaming up with UNiDAYS to handle the verification process. Students and educators will be asked to create a UNiDAYS ID and then verify their academic status by logging in to their school's academic portal. Alternatively, users can upload a photo of their student or faculty IDs. Homeschool teachers, meanwhile, will need to provide an identity document such as a driver's license, state ID card, or passport. They'll also need to provide one homeschool document, such as a Letter of Intent (LOI) or Letter of Acknowledgment. Most customers will be verified instantly, and those requiring manual verification should hear back within 24 hours. The same verification process applies both in-store and online for Apple Education Store shoppers. Meanwhile, Apple has added Apple Watch to the Education Store for the first time, offering discounts on the Series 11, SE 3, and Ultra 3.
- Anthropic Says 'Evil' Portrayals of AI Were Responsible For Claude's Blackmail Attempts
An anonymous reader quotes a report from TechCrunch: Fictional portrayals of artificial intelligence can have a real effect on AI models, according to Anthropic. Last year, the company said that during pre-release tests involving a fictional company, Claude Opus 4 would often try to blackmail engineers to avoid being replaced by another system. Anthropic later published research suggesting that models from other companies had similar issues with "agentic misalignment." Apparently Anthropic has done more work around that behavior, claiming in a post on X, "We believe the original source of the behavior was internet text that portrays AI as evil and interested in self-preservation." The company went into more detail in a blog post stating that since Claude Haiku 4.5, Anthropic's models "never engage in blackmail [during testing], where previous models would sometimes do so up to 96% of the time." What accounts for the difference? The company said it found that training on "documents about Claude's constitution and fictional stories about AIs behaving admirably improve alignment." Related, Anthropic said that it found training to be more effective when it includes "the principles underlying aligned behavior" and not just "demonstrations of aligned behavior alone." "Doing both together appears to be the most effective strategy," the company said.
- Linux Kernel Starts Retiring Support for AMD's 30-Year-Old K5 CPUs
Linux 7.1 started phasing out support for Intel's 37-year-old i486 processor. Linux 7.2 removed drivers for the old AMD Elan 32-bit systems on a chip. And now some i586 and i686 class processors are being removed, reports Phoronix: Supporting those vintage CPUs without the Time Stamp Counter "TSC" instruction is becoming a burden... TSC-capable Intel Pentium processors and the like will still be supported, with this just being for TSC-less i586/i686 CPUs. Among the CPUs impacted by this latest change is the AMD K5 as well as various Cyrix processor models. The K5 was AMD's first entirely in-house designed processor and was first introduced in 1996 to counter the Intel Pentium CPU. TSC "support can now be assumed as a boot requirement for modern Linux," the article points out, which will allow the removal of various non-TSC code paths from the Linux kernel's x86 code. Tom's Hardware remembers the K5 "wasn't a very popular processor as it arrived late, then offered lackluster performance in the competitive environment it joined." Launch SKUs in 1996 were limited to clocks from 75 MHz to 133 MHz, and, due to being late, Intel's Pentium line was already faster. AMD still managed to get an edge on the Cyrix 6x86, though.
- Ford's Electrified Vehicle Sales Dropped 31% in April From One Year Ago
Ford's sales of electrified vehicles — including hybrids and all-electric models — dropped 31% from April 2025, reports Electrek. "Hybrid sales fell 32% to 15,758 vehicles, while EV sales continued to crash with just 3,655 all-electric models sold last month, 25% fewer than in the year prior." After discontinuing the F-150 Lightning in December, sales of the electric pickup have been in free fall. Ford sold just 884 Lightnings last month, 49% less than it did last April. The Mustang Mach-E isn't doing much better. Sales fell another 9% year over year in April, to just 2,670 models last month. Through the first four months of 2026, Ford's EV sales have fallen 61% from last year, with F-150 Lightning and Mustang Mach-E sales down 67% and 50%, respectively. Ford has sold just over 10,500 electric vehicles in total so far this year... For comparison, Toyota sold just over 10,000 bZ models in the first quarter alone. That's more than Ford's total EV sales in Q1. April was Ford's fourth straight month of lower sales figures than in 2025, the article points out. So Ford is bringing back "employee pricing" discounts on most new 2025 and 2026 Ford and Lincoln vehicles, while also offering "purchase incentives" of up to $9,000 for 2025 Lightning models and up to $6,000 for 2025 Mustang Mach-Es. "It's also offering EV buyers a free Level 2 home charger, 24/7 live support, and proactive roadside assistance through its Power Promise program."
- Open Source Project Shuts Down Over Legal Threats from 3D Printer Company Bambu Lab
The free/open source project OrcaSlicer is a popular fork of 3D printer slicing software from Bambu Lab. But on Tuesday, independent developer Pawel Jarczak shuttered the project "following legal threats from Bambu Lab," reports Tom's Hardware: Jarczak's fork of OrcaSlicer would have allowed users to bypass Bambu Connect, a middleware application that severely limits OrcaSlicer's access to remote printer functions in the name of security. Jarczak said in a note on GitHub that Bambu Lab threatened him with a cease and desist letter and accused him of reverse engineering its software in order to impersonate Bambu Studio. From Bambu Lab's blog post: Bambu Studio is an open-source project under the AGPL-3.0 license. Anyone can take its code, modify it, and distribute it... That's what OrcaSlicer does, and 734 other forks do as well. We have no issue with that and never have. At the same time, a license for code is not a pass to our cloud infrastructure... Our cloud is a private service. Access to it is governed by a user agreement, not the AGPL license... [T]he modification in question worked by injecting falsified identity metadata into network communication. In simple terms: it pretended to be the official Bambu Studio client when communicating with our servers... If this method were widely adopted or incorrectly configured, thousands of clients could simultaneously hit our servers while impersonating the official client. "User-Agent is not authentication," counters OrcaSlicer's developer. "It is only self-declared client metadata. Any program can set any User-Agent." And "the User-Agent construction comes directly from Bambu Lab's own public AGPL Bambu Studio code.... So on what basis can anyone claim that I am not allowed to use this specific part of AGPL-licensed code under the AGPL license...? My work was based on publicly available Bambu Studio source code together with my own integration layer." 
But the bottom line is that Bambu Lab "contacted me directly and demanded removal of the solution." I asked whether I could publish the private correspondence in full for transparency. That request was refused... They also referred to legal materials and stated that a cease and desist letter had been prepared... I removed the repository voluntarily. That removal should not be interpreted as an admission that all legal or technical allegations made against the project were correct. I removed it because I have no interest in maintaining a prolonged dispute around this particular implementation, and no interest in continuing to distribute it. YouTuber and right-to-repair advocate Louis Rossmann reviewed the correspondence from Bambu Lab — then pledged $10,000 for legal expenses if the developer returned his code online. ("I think that their legal claim is bullshit," Rossmann said Saturday in a YouTube video for his 2.5 million subscribers. "I'm not a lawyer, but I'm willing to put my money where my mouth is.") "Rossmann has not started a crowdfunding site yet," Tom's Hardware notes, "stating in the comments that he wants to prove to Jarczak that he has supporters willing to put their money where their mouth is. The video had over 129,000 views so far, with commenters vowing to back the case as requested."
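The developer's point that "User-Agent is not authentication" is easy to demonstrate: the header is an arbitrary, self-declared string. A minimal Python sketch (the client string and URL here are invented for illustration, not the actual values either side uses):

```python
import urllib.request

# The User-Agent header is nothing more than self-declared client
# metadata: any program can set it to any string it likes.
# "Bambu Studio/1.0" is a made-up example value.
req = urllib.request.Request(
    "https://example.com/api",
    headers={"User-Agent": "Bambu Studio/1.0"},
)

# urllib stores header names in capitalized form; the declared
# value is sent to the server exactly as given.
print(req.get_header("User-agent"))  # -> Bambu Studio/1.0
```

Nothing in the request proves which program actually produced it, which is why servers that need to know a client's identity use real authentication (tokens, signatures) rather than this header.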
- Most Polymarket Users Lose Money, While Top 1% Claim 76.5% of Gains, Study Finds
In Polymarket's prediction market, "most people end up losing money," reports the Washington Post — typically a few bucks. "Since Polymarket launched in 2022, a few thousand people have lost the bulk of the money... and an even smaller group — 0.05 percent of users — has gone home with most of the overall profits, according to a new analysis from finance researcher Pat Akey and colleagues." A lot of users aren't that good at predicting the future. They're losing money at roughly the same rate as online gamblers betting on sports and other real-life events at traditional sportsbooks, according to the U.K. gambling regulator's analysis of 2024 data. On Polymarket, the odds of making a profit are slightly higher on weather and tech markets — and a little lower on sports... On Polymarket, just 1,200 people took more than half the profits — $591 million, or more than $100,000 each. ["The top 1% of users capture 76.5% of all trading gains," the researchers write.] When you dabble in prediction markets, you're competing against these sophisticated players who consistently win. Most of those 1,200 big winners didn't place just a few smart bets. They appear to be pros making thousands of trades, mostly in the past year and a half, that were probably automated. One user made $3 million since January on more than a million trades about the Oscars, according to TRM Labs... The most profitable participants are also just good at picking what to bet on, Akey found, winning so often it was statistically unlikely to be dumb luck. They had some sort of edge — expertise, deep research or, perhaps, inside knowledge. "Our results suggest that the informational benefits of prediction markets come at a cost to unsophisticated participants," the researchers conclude.
- PlayStation 3 Emulator Devs Politely Ask Contributors to Stop Submitting 'AI Slop' Pull Requests
Open-source PS3 emulator RPCS3 "has been around since 2011," Kotaku notes, and has made 70% of the PlayStation 3's library fully playable, "bolstered in part by the many users who contribute to its GitHub page." But their dev team "took to X today to very kindly and civilly request that users 'stop submitting AI slop code pull requests' to its GitHub page." Then they immediately proceeded to tell the AI-brain-rotted tech bros attempting to justify their vibe-coding nonsense to kick rocks in the replies, which is somewhat less civil but far more entertaining to read... My favorite one was when someone asked how the team was certain they weren't rejecting human-written code, to which RPCS3 replied: "You can't possibly handwrite the type of shit AI slop we have been seeing."
- Honda Patents a Fake Clutch for Electric Motorcycles
An anonymous reader shared this report from Electrek: A newly revealed Honda patent shows the company developing a simulated electronic clutch system for electric motorcycles, complete with torque-boost launches and even haptic feedback designed to mimic the feel of a combustion engine... Instead of using a traditional mechanical clutch, the system uses electronics to alter how the motor responds based on clutch lever position. Pull the clutch halfway in, and the system proportionally reduces motor output. Pull it fully, and power is cut entirely, regardless of throttle position. But the more interesting part is how Honda intends to recreate the behavior riders actually use clutches for. According to the patent as reported by AMCN, riders could preload the throttle while holding in the clutch lever, then rapidly release the lever to trigger a burst of torque — essentially simulating the hard launches motocross riders rely on with gas bikes. Honda believes that could be useful in competitive riding situations where precise power modulation matters, especially on loose terrain or during aggressive starts. Honda also appears to be working on recreating the feel of a gas bike, not just the control inputs. The patent describes multiple vibration motors placed in the handlebars and near the clutch lever to provide haptic feedback that simulates engine vibration and even the "bite point" sensation of a clutch engaging. In other words, Honda may be trying to make an electric dirt bike feel mechanically alive, or at least the old-school idea of what a breathing dirt bike used to feel like.
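The lever-to-motor mapping described above (proportional reduction at partial pull, full cut at full pull) can be sketched as a toy model. The linear function and the numbers here are illustrative guesses, not Honda's actual control law from the patent:

```python
# Toy model of the simulated-clutch behavior described in the patent
# report: partial lever pull reduces motor output proportionally,
# a fully pulled lever cuts power regardless of throttle.
def motor_output(throttle: float, lever: float) -> float:
    """throttle and lever both range 0.0 (released) .. 1.0 (fully applied)."""
    if lever >= 1.0:                 # lever pulled all the way in:
        return 0.0                   # power cut entirely, throttle ignored
    return throttle * (1.0 - lever)  # halfway pull -> half the output

print(motor_output(0.8, 0.5))  # half-pulled lever: 0.4
print(motor_output(0.8, 1.0))  # fully pulled: 0.0
```

The "torque-boost launch" would then correspond to holding `throttle` high while `lever` is at 1.0 (output pinned at zero), and snapping `lever` to 0.0 so the full preloaded output arrives at once.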

- Security: Why Linux Is Better Than Windows Or Mac OS
Linux is a free and open source operating system developed by Linus Torvalds and released in 1991. Since its release it has reached a widespread user base worldwide. Linux users swear by the reliability and freedom that this operating system offers, especially when compared to its counterparts, Windows and …
- Essential Software That Is Not Available On Linux OS
An operating system is essentially the most important component in a computer. It manages the different hardware and software components of a computer in the most effective way. There are different types of operating systems, and each comes with its own set of programs and software. You cannot expect a Linux program to have all …
- Things You Never Knew About Your Operating System
The advent of computers has brought about a revolution in our daily life. From computers so huge they filled a room, we have come a very long way to desktops and even palmtops. These machines have become our virtual lockers, and a life without these networked machines has become unimaginable. Sending mails, …
- How To Fully Optimize Your Operating System
Computers and systems are tricky and complicated. If you lack a thorough, or even basic, knowledge of computers, you will often find yourself in a bind. You must understand that something as complicated as a computer requires constant care and constant cleaning up of junk files. Unless you put in the time to configure …
- The Top Problems With Major Operating Systems
There is no system that does not give you any problems. Even if your system and its operating system are easy to understand, there will be times when certain problems arise. Most of these problems are easy to handle and easy to get rid of. But you must be …
- 8 Benefits Of Linux OS
Linux is a small and fast-growing operating system. However, we can't term it as software yet. As discussed in the article about what a Linux OS can do, Linux is a kernel. Kernels are used by the computer for software and programs, and can be used with various third-party software …
- Things Linux OS Can Do That Other OSes Can't
What Is Linux OS? Linux, similar to Unix, is an operating system which can be used for various computers, handheld devices, embedded devices, etc. The reason why Linux is preferred by many is because it is easy to use and re-use. A Linux-based operating system is technically not an operating system. Operating …
- PackageKit Interview
PackageKit aims to simplify the management of applications on Linux and GNU systems. Its main objective is to remove the pain it takes to maintain a system. In an interview, Richard Hughes, the developer of PackageKit, said that he aims to make Linux systems just as powerful as Windows or …
- What's New in Ubuntu?
What Is Ubuntu? Ubuntu is open source software. It is useful for Linux-based computers. The software is marketed by Canonical Ltd. and the Ubuntu community. Ubuntu was first released in late October 2004. The Ubuntu program uses the Java, Python, C, C++ and C# programming languages. What Is New? Version 17.04 is now available here …
- Ext3 Reiserfs Xfs In Windows With Regards To Colinux
The problem with Windows is that there are various limitations to the computer and there is only so much you can do with it. You can access Ext3, ReiserFS, and XFS by using the coLinux tool. Download the tool from the official site or from the SourceForge site. Edit the connection to "TAP Win32 Adapter …

- OpenBSD and slopcode: raindrop to a torrent?
Every single software product is dealing with the question of what to do with "AI"-generated code, but the question is particularly difficult to answer for open source operating systems like Linux distributions and the various BSDs, which often consist of a wide variety of software packages from hundreds to thousands of different developers. On top of that, they also have to ask the "AI" question for every layer of their offering, from the base install, to the official repositories, to community-run ones. As users, we, too, are asking these same questions, wondering just how much "AI" taint we're willing to spread across our computers. I understand the difficult position Linux distributions are in with regard to "AI". I mean, when even the Linux kernel itself is tainted by "AI", a no-"AI" policy is basically an empty gesture for them at this point. Personally, I find a policy of "we don't do 'AI' in our work, but we don't have control over the thousands of components we consist of" to be an entirely reasonable, if deeply unsatisfying, position to take. What else are they going to do? You can't really be a Linux distribution without, you know, the Linux kernel, which is, as I've already said, utterly tainted by "AI" at this point. Still, in the back of my mind, I always had a trump card: if all else fails, we'll always have OpenBSD. Its project leader Theo de Raadt is deeply principled, every OpenBSD user and contributor I know hates "AI" deeply, and the project routinely sticks to its principles even when it's difficult or inconvenient. Yes, this makes OpenBSD not the most ideal desktop operating system, but I'd rather use that than something that completely embraces the multitude of ethical, environmental, quality, and legal concerns regarding "AI" code. Imagine my surprise, then, to discover that OpenBSD already contains slopcode in its base installation, with the project's leaders and developers remaining oddly silent about it. 
My friend and OSNews regular Morgan posted this on Fedi a few days ago: Nearly six weeks later, the question of whether "AI"-generated code in tmux (not tool-assisted bug finding, not refactoring, but actual LLM-generated slop with questionable license(1)) that was consequently merged into OpenBSD base is considered acceptable by the lead devs remains unanswered. Despite Theo de Raadt's concrete stance against any code of questionable license origin polluting the project (and the tmux merge was indeed questionable), it seems this is being swept under the rug. This makes me extremely uncomfortable; it's like seeing a fox in the henhouse, but the farmers are all looking the other way and no one can convince them to admit they can see it and root it out. I really don't know what to do, being just a user; I feel like even if I tried to chime in on the mailing list I would just be ignored like the others trying to raise the alarm. I hope, as they do, that this is being discussed internally, away from the public list, and that a positive outcome is near. Maybe they are waiting for the 7.9 release before setting anything in stone. Or maybe the "AI" disease has infected one of the last pure operating system projects we have left and there's no going back. ↫ Morgan on Fedi I obviously share Morgan's concerns, and like him, I'm also afraid that opening the door to a few drops of slop in base will quickly grow into a torrent of slop as time goes by. Yes, it's just a patch to tmux, but it's in base, and the "base" of a BSD is almost a sacred concept, and entirely the last place where you want to see code that raises ethical, environmental, quality, and legal concerns. For all we know, this patch of slop or the next one contains a bunch of GPL code, because it just so happens that's where the ball tumbling down the developer's pachinko machine ended up. GPL code that would then be in the base of a BSD. 
I echo the call for the OpenBSD project to address this problem, and to set clear boundaries and guidelines regarding "AI" code, so users and developers alike know what level of quality and integrity we can expect from OpenBSD and its base installation going forward.
- Windows 11 will start boosting your processor to maximum GHz to make the Start menu open faster
Microsoft is currently testing a brand new performance-enhancing feature in Windows 11. Microsoft, too, is introducing something to Windows 11 called a "low latency profile", and this will work irrespective of the processor, be it AMD64 CPUs from Intel or AMD or ARM64 ones from the likes of Qualcomm. Essentially what this new tech will do is apply a maximum available clock frequency boost for a very small span of time, like one to three seconds, when a user launches any app. The idea is that the app launch time will be reduced, while the quick clock burst should not impact the overall efficiency of the system by much. ↫ Sayan Sen at Neowin Unsurprisingly, boosting the processor's clock speed to its maximum for a few seconds will make a menu or application open a little faster. I'm not entirely sure why anyone seems surprised by this, but here we are. Yes, the Start menu will load faster and applications will be ready quicker if you boost the processor to its full potential, but that does raise the question of why Windows 11 would need to do that just to open a menu or load an application in the first place. According to Microsoft's Scott Hanselman, who defended Microsoft's approach (weirdly enough, he did so on a nazi platform called "Twitter" that I'm obviously not linking to), every other modern operating system does the exact same thing, pointing specifically to macOS, and GNOME and KDE on Linux. He also pointed out that the Start menu today does a lot more than the same Start menu back in Windows 95, including making network requests and rendering everything in HiDPI. I just want a cascading menu of stuff I can run and don't want my launcher to make network requests, but alas, I guess I'm old. Anyway, I don't know enough about the intricacies of how modern processors work to make any statements about how this affects battery life, but instinctively, you'd think this would not exactly be conducive to that. 
I also wonder if this will trigger a lot of laptops to spin up their fans whenever you open the Start menu, because those few seconds of your processor going full tilt raise its temperature just enough to make that happen. Once this new feature comes out of testing and is generally available, I'd be quite interested in seeing battery tests, as well as comparisons to other operating systems to see how it fares.
- GitHub is sinking
Microsoft acquired GitHub and applied their unique brand of enshittification. Amongst their achievements was the spawning of the Copilot circle of hell. Now they’re effectively DDoSing themselves with slop. I won’t dwell on what else went wrong. I don’t know and I don’t care. GitHub is impressively bad now. It’s embarrassing. Shameful. ↫ David Bushell Luckily, there's really very little in the form of lock-in with GitHub, unless you really value your stars or whatever. There are countless alternatives, and if you're a programmer, it's probably absolutely trivial for you to run your own instance of any of the various available forges. If you're still on GitHub, you should really be thinking about, and planning for, leaving, as it seems it's circling the drain.
- Debian embraces reproducible builds
Big news from the Debian release team: Debian is going for reproducible package builds. Aided by the efforts of the Reproducible Builds project, we've decided it's time to say that Debian must ship reproducible packages. Since yesterday, we have enabled our migration software to block migration of new packages that can't be reproduced or existing packages (in testing) that regress in reproducibility. ↫ Paul Gevers Reproducible means, in short, that rebuilding a package from its source code yields a byte-for-byte identical result, so you can verify that the package you install was indeed built from that source code. This provides a layer of defense against people tampering with code or otherwise trying to fiddle with the process between source code and final package on your system. This effort constitutes a tremendous amount of work, but it's massively important.
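The verification step amounts to a hash comparison: rebuild the package from the same source and check that the result matches the published artifact bit for bit. A toy sketch of the idea in Python (the `build` function is a stand-in for a deterministic build process, not anything Debian-specific):

```python
import hashlib

def build(source: bytes) -> bytes:
    # Stand-in for a deterministic build step: the same input must
    # always produce the same output, byte for byte.
    return b"pkg:" + source[::-1]

source = b"int main(void) { return 0; }"

# The distributor publishes the digest of the official build...
published_digest = hashlib.sha256(build(source)).hexdigest()

# ...and anyone can rebuild from the same source and compare.
rebuilt_digest = hashlib.sha256(build(source)).hexdigest()
print(rebuilt_digest == published_digest)  # True
```

The hard part in practice is making real builds this deterministic: timestamps, file ordering, build paths, and parallelism all have to be pinned down so that independent rebuilds really do produce identical bytes.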
- Building a web server in aarch64 assembly to give my life (a lack of) meaning!
ymawky is a small, static http web server written entirely in aarch64 assembly for macos. it uses raw darwin syscalls with no libc wrappers, serves static files, supports GET, HEAD, PUT, OPTIONS, DELETE, byte ranges, directory listing, custom error pages, and tries to be as hardened as possible. why? why not? the dream of the 80s is alive in ymawky. everybody has nginx. having apache makes you a square. so why not strip every single convenience layer that computer science has given us since 1957? i wanted to understand how a web server actually works, something i know little about coming from a low-level/systems background. the risks that come up, the problems that need to be solved, the things you don’t think about when you’re writing python or c. this (probably) won’t replace nginx, but it is doing something in the most difficult way possible. ↫ Tony imtomt. I love this.
- Object oriented programming in Ada
Ada is incredibly well designed. One way this shows is that it takes the big, monolithic features of other languages and breaks them down into their constituent parts, so we can choose which portions of those features we want. The example I often reach for to explain this is object-oriented programming. ↫ Christoffer Stjernlöf Exactly what it says on the tin.
- Sculpt OS 26.04 released
Sculpt OS, the operating system based on the various components that make up Genode, has seen a new release, 26.04. A lot of the new features and changes to Genode that we've been talking about for a while now are part of this release, most notably the new human-inclined data syntax that replaces XML as the configuration language for Genode. That's not the only major improvement, though. Regarding technical advances of the new version and device support in particular, all Linux-based drivers have been updated to kernel version 6.18, making the system compatible with most modern Intel-PC hardware. Laptop users may appreciate the new USB networking option that is now offered by default. Software-wise, the new version comes with a longed-after update of Qt6 along with the Chromium-based Falkon browser, downloadable at the depot of cproc. In the same menu, one can find the experimental first version of the Goa SDK running natively on Sculpt OS without the need of a Linux VM. For the first time, Genode components can now be developed, compiled, and tested using Sculpt OS on its own. The amazement of walking without crutches. ↫ Sculpt OS 26.04 release notes This new release is available for common PC hardware, the PinePhone, and the MNT Reform.
- Sprite scaling on the Master System: building the new on the ruins of the old
Sprite scaling. It is the coolest effect of the 2D arcade era, a must-have for games from Space Harrier to Real Bout Fatal Fury Special. Home consoles pretty much lacked it; sorry, Nintendo, but Mode 7 only scales a background, not sprites. So therefore you might be surprised to hear that Sega’s plucky underdog Master System could do it. Well, don’t get your hopes up; this is far too limited, and calling it scaling is overstating things. But let’s dig in anyway! ↫ Nicole Branagan Nicole Branagan has the best articles on obscure console features, and this one is no exception.
- Google is tying reCAPTCHA to Google Play Services, screwing over de-Googled Android users
The ways in which Google can lock you into their ecosystem are often obvious, but sometimes, they're incredibly sneaky and easily missed. CAPTCHA tests are annoying, but at the same time, they can help protect websites from bots. While these tests are already the bane of our internet existence, they are going to get worse for some Android users. A requirement for Google’s next-generation reCAPTCHA system will make it a lot harder for de-Googled phones to browse the web. A Reddit user has highlighted a seemingly innocuous support page for Google’s reCAPTCHA system. The page in question relates to troubleshooting reCAPTCHA verification on mobile. In the document, it says that you’ll need to use a compatible mobile device to complete verification. If you have an Android phone, then that means you’ll need to be running Google Play Services version 25.41.30 or higher. ↫ Ryan McNeal at Android Authority When was the last time you actively thought about reCAPTCHA being a Google property? Even then, when was the last time you imagined something as annoying but ultimately basic as a captcha prompt could be used to tie people to Google Play Services, and thus to "blessed" Android? Every time we manage to work around one of these asinine ties to Google Play Services, another one pops up to ruin our day. We're so stupidly tied down to, and entirely dependent on, two very mid at best mobile operating systems, and it's such a stupid own goal for everyone outside of the US especially to just sit there and do nothing about it. Worse yet, it seems we're only tying ourselves down further, while paying for the privilege. At the very least we should be categorising certain services (government ID services, payment services, popular messaging platforms, and a few more) as vital infrastructure, and legally mandate that these services have clearly defined and well-documented APIs so anyone is free to make alternative clients. The fact that many people are tied to either iOS or "blessed" Android because of something as stupid as what bank they use or the level of incompetency of their government ID service should be a major crisis in any country that isn't the US. I don't want to use iOS or Android, but nobody is leaving me any choice. It's infuriating.
- Why don’t lowercase letters come right after uppercase letters in ASCII?
With that context, I always found it strange that the designers of ASCII included 6 characters after uppercase Z before starting the lowercase letters. Then it hit me: we have 26 letters in the English alphabet, plus 6 additional characters before lowercase starts: 26 + 6 = 32. If you know anything about computers, powers of 2 tend to stick out. Let’s take a look at the binary representations of some characters compared to their lowercase counterparts. ↫ Tyler Hillery I only have a middling understanding of the rest of the article and thus the ultimate reason why ASCII includes those six characters between Z and a, but I think it comes down to making certain operations on uppercase and lowercase letters specifically more elegant. In some deep crevices of my brain all of this makes sense, but I find it very difficult to truly understand and explain as someone who knows little about programming.
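The six-character gap puts each uppercase letter exactly 32 (0x20) positions below its lowercase partner, so the two forms differ in a single bit, which can be toggled, set, or cleared to change case. A quick Python illustration:

```python
# 'A' = 65 = 0b1000001, 'a' = 97 = 0b1100001: only bit 5 (0x20) differs.
print(f"{ord('A'):07b}  {ord('a'):07b}")  # 1000001  1100001

CASE_BIT = 0x20

def toggle_case(c: str) -> str:
    # Flip bit 5: swaps uppercase and lowercase.
    return chr(ord(c) ^ CASE_BIT)

def to_upper(c: str) -> str:
    # Clear bit 5: forces uppercase regardless of input case.
    return chr(ord(c) & ~CASE_BIT)

print(toggle_case("A"), toggle_case("z"))  # a Z
print(to_upper("q"))                       # Q
```

This is why early hardware and software could fold case with a single AND or OR on the character code, which is exactly the kind of elegance the article suggests the ASCII designers had in mind.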
- Detecting (or not) the use of -l and -c together in Bourne shells
Many Bourne shells go slightly beyond the POSIX sh specification to also support a -l option that makes the shell act as a login shell. POSIX's omission of -l isn't only because it doesn't really talk about login shells at all; it's also because Unix has a special way of marking login shells that goes back very far in its history. The -l option isn't necessarily what login and sshd and so on use; it's something that you can use if you specifically want to get a login shell in an unusual circumstance. Bourne shells also have a -c 'command string' option that causes the shell to execute the command string rather than be interactive (this is a long-standing option that is in POSIX). It may surprise you to hear that most or all Bourne shells that support -l also allow you to use -l and -c together. Basically all Bourne shells interpret this as first executing your .profile and so on, then executing the command string instead of going interactive. One use for this is to non-interactively run a command line in the context of your fully set up shell, with $PATH and other environment variables ready for use. ↫ Chris Siebenmann Now, what if you want to detect the use of these two options combined, for instance to make it so certain parts of your .profile are ignored? It turns out very few Bourne shells actually support this, and that's what Siebenmann's latest post is about.
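Bash is one shell where both flags do leave visible traces from the inside: `shopt -q login_shell` succeeds in a login shell, and `$BASH_EXECUTION_STRING` holds the `-c` command string. A small probe, assuming bash is installed (this is bash-specific behavior, not portable Bourne/POSIX sh):

```python
import subprocess

# Ask a login (-l), non-interactive (-c) bash whether it can see both
# flags from the inside: shopt -q login_shell exits 0 in a login shell,
# and BASH_EXECUTION_STRING is set to the -c argument.
probe = 'shopt -q login_shell && [ -n "$BASH_EXECUTION_STRING" ] && echo both'
out = subprocess.run(
    ["bash", "-l", "-c", probe],
    capture_output=True, text=True,
).stdout

# Output from /etc/profile or ~/.profile may precede the marker line.
print("both" in out.splitlines())
```

In a `.profile` meant only for bash, the same two checks could guard the sections you want skipped when the shell was started with `-l -c`; in plainer Bourne shells, as Siebenmann's post discusses, there is usually nothing equivalent to check.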
- Fedora Project Leader says he doesn't care about the reputational damage from Fedora embracing "AI"
On the Fedora forums, there's a long-running thread about a proposal for Fedora to build a variant of the distribution aimed specifically at "AI". The "problem" identified in the proposal is that setting up the various parts that a developer in the "AI" space needs is currently quite difficult on Fedora, and as such, a bunch of technical steps need to be taken to make this easier. Setting aside the "AI" of the proposal and ensuing discussion, it's actually a very interesting read, going deep into the weeds about consequential questions like building an LTS kernel on Fedora, support for out-of-tree kernel mods, and a lot more. To spoil the ending: the proposal has already been approved unanimously by the Fedora Council, meaning the efforts laid out in the proposal will be undertaken. This means that, depending on progress, we'll see a Fedora "AI" Desktop or whatever it's going to be called somewhere in the timeframe from Fedora 45 to Fedora 47. As a Fedora user on all my machines, I'm obviously not too happy about this, since I'd much rather the scarce resources of a project like Fedora go towards things not as ethically bankrupt, environmentally destructive, and artistically deficient as "AI", but in the end it's a project owned and controlled by IBM, so it's not exactly unexpected. What really surprised me in this entire discussion is a post by Fedora Project Leader Jef Spaleta, responding to worries people in the thread were having about such a big "AI" undertaking under the Fedora branding causing serious reputational damage to Fedora as a whole. These concerns are clearly valid, as people really fucking hate "AI", doubly so in the open source community, whose work (especially in the case of "AI" coding tools) is built on without any form of consent. As such, Fedora undertaking a big "AI" desktop project is bound to have a negative impact on Fedora's image. Just look at what aggressively pushing Copilot has done to Windows 11's already shit reputation. Spaleta, however, just doesn't care. 
Literally. As the Fedora Project Leader, I am absolutely not concerned about the reputational damage to this project that comes with setting up an entirely new output attractive to developers who want to make use of Ai tools. ↫ Jef Spaleta I've been looking at this line on and off for a few days now, and I just can't wrap my head around how the leader of an open source project built on and relying on the free labour of thousands of contributors says he doesn't care about reputational damage to the project he's leading. Effective and capable open source contributors are not exactly a commodity, and a lot of the decisions they make about what projects to donate their time to are based on vibes and personal convictions; you can't really pay them to look the other way. Saying you don't care about reputational damage to your huge open source project seems rather shortsighted, but of course, I don't lead a huge open source project, so what do I know? In the linked thread alone, one long-time Fedora contributor, Fernando Mancera, already decided to leave the project on the spot, and I have a sneaking suspicion he won't be the last. "AI" is a deeply tainted hype on many levels, and the more you try to chase this dragon, the more capable people you'll end up chasing away.
- Redox gets partial window pixel updating, tmux, and more
Another month, another progress report, Redox, etc. etc., you know the drill by now. This past month, Redox saw improved booting on real hardware by making sure the boot process continues even if certain drivers fail or become blocked. Thanks to some changes on the RISC-V side, running Redox on real RISC-V hardware has also improved. Furthermore, tmux has been ported to Redox, CPU time reporting has been improved, and Orbital, Redox's desktop environment, gained support for partial window pixel updating, which should increase UI performance. On top of that, there's a brand new web user interface to browse Redox packages (x86-64, i586, ARM64 (aarch64), and RISC-V (riscv64gc)), as well as the usual list of improvements to the kernel, drivers, relibc, and many more areas of the operating system.
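Partial window updating generally works by tracking a "damage" region and repainting only that. As a rough illustration of the idea (not Orbital's actual implementation), here is a minimal sketch that computes the bounding box of changed pixels between two frames:

```python
def damage_rect(old, new):
    """Return the (top, left, bottom, right) bounding box of changed cells
    between two equally sized 2D pixel buffers, or None if identical."""
    changed_rows = [r for r in range(len(old)) if old[r] != new[r]]
    if not changed_rows:
        return None  # nothing to repaint
    changed_cols = [
        c
        for r in changed_rows
        for c in range(len(old[r]))
        if old[r][c] != new[r][c]
    ]
    return (min(changed_rows), min(changed_cols),
            max(changed_rows), max(changed_cols))

# A compositor would then copy only this rectangle to the screen
# instead of blitting the entire window surface.
```

The win is proportional to how small the damaged rectangle is relative to the window, which is why cursor blinks and single-widget updates benefit the most.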
- Setting up a Sun Ray server on OpenIndiana Hipster 2025.10
Time for another Sun Ray blog post! I've had a few people email me asking for help setting up a Sun Ray server over the last few months, and despite my attempts to help them get it going, there have been mixed results with running SRSS on OpenIndiana Hipster 2025.10. My Sun Ray server is still on an earlier OI snapshot, so I figured it was about time to try to actually follow the new guides myself. ↫ The Iris System Ever since spiraling down the Sun rabbit hole late last year, I've tried a few times now to get the x86 version of OpenIndiana and Oracle Solaris working on any of my machines, exactly for the purposes of setting up a modern Sun Ray server. Sadly, none of my machines are compatible with any illumos distribution or Oracle Solaris, so I've been shit out of luck trying to get this side project off the ground. My Ultra 45 is sadly also not supported by any SPARC version of illumos or Oracle Solaris, so unless I buy even more hardware, my dream of a modern Sun Ray setup will have to wait. Of course, virtualisation is an option for many, and that's exactly what this particular guide is about: setting up OpenIndiana on a Proxmox virtual machine. I actually have a Proxmox machine up and running and could do this too, but I'm a sucker for running stuff like this on real hardware. Yes, that makes my life more complicated and difficult, and no, it's not more noble or real or hardcore; it's just a preference. Still, for normal people who pick up a Sun Ray or two on eBay for basically nothing, running OpenIndiana in a virtual machine is the smart, reasonable, and effective option.
- My favorite device is a Chromebook, without ChromeOS!
If you're sick of Chrome OS on your Chromebook, or can find a Chromebook for cheap somewhere but don't actually want to use Chrome OS, have you considered postmarketOS? Since I was kind of frustrated with ChromeOS, I decided to take a look at something that I knew supported my Lenovo Duet 3 for some time: postmarketOS. For those who don't know, postmarketOS is an Alpine Linux-based distro focused on replacing the original OS of old phones (generally running Android) with a "true" Linux distro. They also seem to support some Chromebooks because of their unique architecture and, luckily, they support my device under the google-trogdor platform. ↫ kokada PostmarketOS is aimed primarily at smartphones, but supports other form factors just fine as well. The Duet 3 is one of the tablet-like devices it supports, and it seems most things are working quite well. In fact, judging by the postmarketOS wiki, quite a few Chromebooks have good support, and with Chromebooks being cheap and a dime a dozen on eBay and similar auction sites, it seems like a great way to get started with what is trying to become a true Linux for smartphones.
- The text mode lie: why modern TUIs are a nightmare for accessibility
There is a persistent misconception among sighted developers: if an application runs in a terminal, it is inherently accessible. The logic assumes that because there are no graphics, no complex DOM, and no WebGL canvases, the content is just raw ASCII text that a screen reader can easily parse. The reality is different. Most modern Text User Interfaces (TUIs) are often more hostile to accessibility than poorly coded graphical interfaces. The very tools designed to improve the Developer Experience (DX) in the terminal—frameworks like Ink (JS/React), Bubble Tea (Go), or tcell—are actively destroying the experience for blind users. ↫ Casey Reeves The core reason should be obvious: the command-line interface, at its core, is just a stream of data with the newest data at the bottom, linearly going back in time as you go up. Any screen reader can deal with this fairly easily, and while I personally have no need for such a tool, I've heard from those that do that kernel-level screen readers are quite good at what they do. TUIs, or text-based user interfaces, made with modern frameworks are actually very different: they're a 2D grid in which every character cell acts like a pixel, abandoning the temporal flow for a spatial layout. It should become immediately obvious that screen readers won't really know what to do with this, and Reeves gives countless examples, but the short version is this: the cursor jumps all over the place with every screen update, which makes screen readers go nuts. Various older TUIs, made in a time well before these modern TUI frameworks came about, were designed in a much more terminal-friendly way, or give you options to hide the cursor to solve the problem that way. Irssi, for example, uses VT100 scrolling regions instead of redrawing the whole screen every time something changes. I had never really stopped to think about TUIs and screen readers, as is common among us sighted people.
The problems Reeves describes seem to stem not so much from TUIs being inherently inaccessible, but from modern frameworks not actually making use of the terminal's core feature set. I really hope Reeves' article shines a light on this problem, and that the people developing these modern TUIs start taking accessibility more seriously.
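To make the contrast concrete, here is a small sketch of the VT100 escape sequences involved: a scrolling-region approach transmits a tiny, screen-reader-friendly stream, while a naive framework repositions the cursor and repaints the whole grid. The sequences shown (`ESC[top;bottomr` to set a region via DECSTBM, `ESC[row;colH` to move the cursor) are standard VT100, but this is only an illustration of the technique, not Irssi's actual code:

```python
ESC = "\x1b"

def set_scroll_region(top: int, bottom: int) -> str:
    # DECSTBM: confine scrolling to rows top..bottom (1-based, inclusive)
    return f"{ESC}[{top};{bottom}r"

def move_cursor(row: int, col: int) -> str:
    # CUP: move the cursor to an absolute position
    return f"{ESC}[{row};{col}H"

def append_line_scrolled(text: str, top: int, bottom: int) -> str:
    """Append one line inside a scrolling region: the terminal scrolls
    the region itself, so only the new line needs to be transmitted."""
    return move_cursor(bottom, 1) + "\n" + text

def full_redraw(grid) -> str:
    """What many modern TUI frameworks effectively do every frame:
    jump the cursor to each row and repaint it, which is what makes
    screen readers lose track of the reading position."""
    return "".join(move_cursor(r + 1, 1) + row for r, row in enumerate(grid))
```

With the scrolling-region approach, a screen reader sees new text arriving at a stable position; with the full redraw, it sees the cursor teleporting across the grid on every update.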

- Linux 7.1-rc2 Released with Driver Fixes, Steam Deck OLED Audio Repair, and Growing AI Patch Trends
by George Whittaker
Linus Torvalds has officially released Linux kernel 7.1-rc2, the second release candidate in the Linux 7.1 development cycle. While Torvalds described the update as a “fairly normal” RC release, the kernel includes a broad collection of driver fixes, subsystem cleanups, and stability improvements that continue shaping the next major Linux kernel release.
Although still an early testing version intended mainly for developers and enthusiasts, Linux 7.1-rc2 already delivers several notable fixes—especially for graphics hardware, networking, and gaming devices like the Steam Deck OLED.
A Strange-Looking Release—But for a Good Reason
One of the first things Torvalds mentioned in the release announcement was the unusually large patch statistics. At first glance, the release appears much larger than expected, but there’s an explanation behind the inflated numbers.
Much of the activity comes from a large cleanup effort in the KVM selftests subsystem, where developers renamed variables and types to better match Linux kernel coding conventions. Because thousands of lines were renamed rather than fundamentally rewritten, the patch count looks dramatic even though the underlying functional changes are relatively modest.
Torvalds specifically advised testers not to overreact to the “big and strange” diff statistics.
Graphics and Driver Fixes Take Center Stage
As is common during early release candidates, a large portion of the work in Linux 7.1-rc2 focuses on hardware drivers. GPU and networking drivers account for a significant share of the meaningful fixes in this release.
Notable improvements include:
- Additional fixes for AMD GPU support
- Intel Xe graphics driver adjustments and tuning
- Networking stability improvements
- Filesystem fixes, including NTFS driver updates
- Memory leak patches and race-condition corrections
These kinds of updates are critical during the RC phase because they help stabilize hardware compatibility before the final release reaches mainstream distributions.
Steam Deck OLED Audio Finally Gets Fixed
One of the more interesting fixes in Linux 7.1-rc2 addresses a long-standing issue affecting the Steam Deck OLED. According to reports, audio support for Valve’s handheld had been broken in the mainline Linux kernel for nearly two years, forcing Valve and some handheld-focused distributions to carry their own downstream patches and workarounds.
With Linux 7.1-rc2, an upstream fix for the audio issue has finally landed, potentially simplifying support for Linux gaming handhelds moving forward.
For Linux gamers and portable gaming enthusiasts, this is one of the more practical improvements included in the release candidate. Go to Full Article
- LibreOffice 26.4 Beta Experiments with AI Writing Features and Smarter Editing Tools
by George Whittaker
The upcoming LibreOffice 26.4 Beta is introducing early AI-powered writing capabilities, signaling a new direction for the open-source office suite. While LibreOffice has traditionally focused on privacy, local processing, and open standards, the beta release shows that The Document Foundation is now exploring how artificial intelligence can assist users without fully embracing cloud-dependent ecosystems.
The result is a cautious but notable step toward AI-enhanced productivity on Linux and other desktop platforms.
AI Writing Assistance Comes to LibreOffice
One of the biggest additions connected to LibreOffice 26.4 Beta is expanded support for AI-assisted writing tools through integrations such as WritingTool, an open-source LibreOffice extension designed to enhance editing workflows.
These AI features focus on practical writing assistance rather than aggressive automation. Current capabilities include:
- Grammar and style suggestions
- Paragraph rewriting and refinement
- Text expansion and summarization
- Translation assistance
- AI-assisted content generation
Unlike many proprietary AI platforms, these tools can operate using local AI models, allowing users to avoid sending documents to external cloud services.
A Privacy-Focused Approach to AI
LibreOffice’s AI direction differs from the strategies used by many commercial office suites. Instead of tightly integrating mandatory cloud AI services, the project appears focused on:
- Optional AI functionality
- User-controlled integrations
- Support for local inference servers
- Compatibility with self-hosted AI solutions
The WritingTool project specifically highlights support for local AI backends and OpenAI-compatible APIs, including self-hosted tools like LocalAI.
This approach aligns closely with the values of many Linux and open-source users who prioritize privacy and transparency.
What AI Tools Can Actually Do
The AI writing features currently being tested are aimed at improving productivity rather than replacing human writing entirely.
Examples include:
Grammar and Style Improvements
AI can analyze text for readability, awkward phrasing, and stylistic consistency.
Paragraph Rewriting
Users can ask the assistant to:
- Simplify text
- Make writing more formal or casual
- Expand short sections
- Rephrase unclear sentences
Content Assistance
The tools can also help generate outlines, draft paragraphs, or suggest alternative wording for documents. Go to Full Article
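Since WritingTool advertises OpenAI-compatible backends, the wire format involved is the familiar chat-completions payload. As a hedged sketch (the endpoint path, model name, and prompt below are illustrative assumptions, not WritingTool's actual requests), a local rewrite request might be assembled like this:

```python
import json

# Hypothetical defaults for a self-hosted, OpenAI-compatible server
# such as LocalAI; the URL and model name are assumptions.
LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"

def build_rewrite_request(text: str, tone: str = "formal",
                          model: str = "local-writing-model") -> dict:
    """Build a chat-completions payload asking a local model to rewrite
    `text` in the requested tone. Nothing leaves the machine until this
    payload is POSTed to LOCAL_ENDPOINT."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": f"Rewrite the user's text in a {tone} tone."},
            {"role": "user", "content": text},
        ],
        "temperature": 0.2,
    }

payload = build_rewrite_request("LibreOffice are a office suite.")
body = json.dumps(payload)  # ready to send with any HTTP client
```

The privacy argument in the article falls out of this shape directly: swapping the cloud for LocalAI is just a change of base URL, because the payload format is identical.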
- Linux Foundation Launches Open Driver Initiative to Strengthen Hardware Support Across Linux
by George Whittaker
The Linux Foundation has announced a new Open Driver Initiative, a collaborative effort aimed at improving the development, maintenance, and long-term sustainability of open-source hardware drivers across the Linux ecosystem.
The initiative reflects growing demand for better hardware compatibility in areas ranging from desktops and gaming systems to cloud infrastructure, automotive platforms, AI hardware, and next-generation networking. As Linux expands into more industries and devices, driver quality and openness have become increasingly important.
Why Open Drivers Matter
Hardware drivers are the bridge between the operating system and physical components such as:
- Graphics cards
- Wi-Fi adapters
- Storage controllers
- Network devices
- Embedded and automotive systems
When drivers are open source, developers can:
- Improve compatibility more quickly
- Audit code for security issues
- Maintain support for older hardware longer
- Integrate drivers more cleanly into the Linux kernel
Open drivers also reduce dependence on proprietary vendor software, which can become outdated or unsupported over time.
What the Open Driver Initiative Aims to Do
According to early details surrounding the Linux Foundation’s broader infrastructure efforts, the initiative is designed to encourage:
- Shared driver development standards
- Better collaboration between hardware vendors and kernel maintainers
- Open governance models for driver ecosystems
- Improved testing, validation, and long-term maintenance
The effort appears aligned with the Linux Foundation’s long-standing role as a neutral organization coordinating open-source collaboration across industries.
A Push for Industry-Wide Collaboration
The initiative arrives at a time when Linux is increasingly used in:
- AI and high-performance computing
- Automotive and software-defined vehicles
- Telecommunications and Open RAN infrastructure
- Embedded devices and edge computing
Several Linux Foundation-hosted projects already emphasize open infrastructure and hardware collaboration, including Automotive Grade Linux (AGL) and networking initiatives focused on open radio access networks.
By launching a dedicated effort around drivers, the Linux Foundation is attempting to reduce fragmentation and improve interoperability across hardware ecosystems.
Why This Matters for Linux Users
For everyday Linux users, better open driver support can lead to: Go to Full Article
- Canonical Unveils Ubuntu AI Strategy: Local Models, User Control, and Smarter Workflows
by George Whittaker
Canonical has officially revealed its long-anticipated plans to bring artificial intelligence features into Ubuntu, marking a significant shift for one of the world’s most widely used Linux distributions. Rather than rushing into the AI wave, Canonical is taking a measured, privacy-focused approach, one that aims to enhance the operating system without compromising its open-source values.
The rollout is expected to take place gradually throughout 2026, with early features likely appearing in upcoming Ubuntu releases.
A Gradual, Thoughtful AI Rollout
Canonical isn’t positioning Ubuntu as an “AI-first” operating system. Instead, the company is introducing AI in stages, focusing on practical improvements rather than hype-driven features.
The plan follows a two-phase model:
- Implicit AI features: enhancements running quietly in the background
- Explicit AI features: user-facing tools and workflows powered by AI
This approach allows Ubuntu to evolve naturally, improving existing functionality before introducing more advanced capabilities.
Local AI First, Not the Cloud
One of the most important aspects of Canonical’s strategy is its emphasis on local AI processing, also known as on-device inference.
Instead of sending data to remote servers, Ubuntu will aim to:
- Run AI models directly on the user’s hardware
- Reduce reliance on cloud services
- Improve privacy and performance
Canonical has made it clear that local inference will be the default, with cloud-based options available only when explicitly chosen by the user.
This aligns closely with the privacy expectations of Linux users, who often prefer greater control over their data.
What AI Features Could Look Like
Canonical has outlined several potential use cases for AI inside Ubuntu. These include:
Accessibility Improvements
AI will enhance tools like:
- Speech-to-text
- Text-to-speech
- Assistive technologies
These features aim to make Ubuntu more inclusive and easier to use for a wider range of users.
Smarter System Assistance
Future AI features may help users:
- Troubleshoot system issues
- Interpret logs and error messages
- Automate repetitive tasks
This could significantly lower the learning curve for new Linux users.
Agent-Based Automation
Canonical is also exploring “agentic” AI workflows, where AI can take actions on behalf of the user.
Examples include: Go to Full Article
- Thunderbird 150 Lands on Linux: Smarter Encryption, Better Tools, and a Polished Experience
by George Whittaker
Mozilla has officially rolled out Thunderbird 150.0, the latest version of its open-source email client, bringing a mix of security-focused enhancements, usability upgrades, and workflow improvements for Linux and other platforms. Released in April 2026, this update continues Thunderbird’s steady evolution as a powerful desktop email solution.
For Linux users, Thunderbird 150 delivers meaningful updates that improve both everyday usability and advanced email handling, especially for encrypted communication.
Stronger Support for Encrypted Email
One of the standout improvements in Thunderbird 150 is how it handles encrypted messages.
Users can now:
- Search inside encrypted emails (OpenPGP and S/MIME)
- Generate “unobtrusive” OpenPGP signatures that appear cleaner to recipients
These changes make encrypted communication far more practical, especially for users who rely on secure email for work or privacy-sensitive tasks.
New Productivity and Workflow Features
Thunderbird 150 introduces several small but impactful workflow improvements:
- A new Account Hub opens automatically on first launch, simplifying setup
- Recent Destinations in settings can now be sorted alphabetically
- Address book entries can be copied as vCard files
- A new custom accent color option allows interface personalization
These updates make Thunderbird easier to configure and more flexible to use daily.
Improved Built-In PDF Viewer
Thunderbird’s integrated PDF viewer gets a useful upgrade: users can now reorder pages directly within the viewer.
This is particularly helpful for:
- Managing attachments without external tools
- Editing documents quickly before sending
- Streamlining email-based workflows
Combined with ongoing security fixes, the PDF viewer becomes both more capable and safer.
Calendar and Interface Enhancements
Several improvements focus on usability and accessibility:
- Calendar views now support touchscreen scrolling
- Fixed issues with calendar layouts and navigation
- Better screen reader support and accessibility fixes
- General UI refinements across the application
These changes contribute to a smoother, more consistent user experience across devices.
Bug Fixes and Stability Improvements
Thunderbird 150 also resolves a wide range of issues, including: Go to Full Article
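The "copy as vCard" feature refers to the standard vCard text format (vCard 3.0, later revised as RFC 6350). As a minimal illustration of what such an export contains (not Thunderbird's actual implementation), a contact can be serialized like this:

```python
def contact_to_vcard(full_name: str, email: str) -> str:
    """Serialize a minimal contact as a vCard 3.0 string.
    Real exporters add many more properties (N, TEL, ADR, ...)."""
    lines = [
        "BEGIN:VCARD",
        "VERSION:3.0",
        f"FN:{full_name}",
        f"EMAIL:{email}",
        "END:VCARD",
    ]
    # vCard lines are CRLF-terminated per the specification
    return "\r\n".join(lines) + "\r\n"
```

Because the format is plain text, a copied entry can be pasted into any other address book or mail client that speaks vCard, which is the point of the feature.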
- Linux Kernel 6.19 Reaches End of Life: Time to Move Forward
by George Whittaker
The Linux kernel continues its fast-paced release cycle, and with that comes an important milestone: Linux kernel 6.19 has officially reached end of life (EOL). For users and distributions still running this branch, it’s now time to upgrade to a newer kernel version.
This isn’t unexpected; Linux 6.19 was never intended to be a long-term release, but it does serve as a reminder of how quickly non-LTS kernel branches move through their lifecycle.
Official End of Support
The final update in the 6.19 series, Linux 6.19.14, has been released and marked as the last maintenance version. Kernel maintainer Greg Kroah-Hartman confirmed that no further updates will follow, stating that the branch is now officially end-of-life.
On kernel.org, the 6.19 series is now listed as EOL, meaning it will no longer receive bug fixes or security patches.
Why 6.19 Had a Short Lifespan
Unlike some kernel releases, Linux 6.19 was not a long-term support (LTS) version. Short-lived kernel branches are typically supported for only a few months before being replaced by newer releases.
Linux follows a rapid development model:
- New major versions are released frequently
- Short-term branches receive limited updates
- Only selected kernels are designated as LTS for extended support
Because of this, 6.19 was always meant to be a stepping stone rather than a long-term foundation.
What Users Should Do Now
With 6.19 no longer maintained, continuing to use it poses risks, especially in environments where security and stability matter.
Recommended upgrade paths include:
Upgrade to Linux 7.0
The most direct path forward is the Linux 7.0 kernel series, which succeeds 6.19 and introduces new hardware support and ongoing fixes.
This is a good option for:
- Desktop users
- Rolling-release distributions
- Users who want the latest features
Switch to an LTS Kernel
For production systems, servers, or long-term stability, moving to an LTS kernel is often the better choice.
Current LTS options include:
- Linux 6.18 LTS (supported until 2028)
- Linux 6.12 LTS (supported until 2028)
- Linux 6.6 LTS (supported until 2027)
These versions receive ongoing security updates and are better suited for stable environments.
Why EOL Matters
When a kernel reaches end of life: Go to Full Article
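The upgrade logic above boils down to comparing the running branch against the EOL and LTS lists. A small sketch (the branch/status tables are taken from the article; `parse_branch` assumes a `major.minor.patch` release string such as the output of `uname -r`):

```python
def parse_branch(release: str) -> tuple:
    """Reduce a kernel release string like '6.19.14' or
    '6.12.30-generic' to its (major, minor) branch."""
    parts = release.split("-")[0].split(".")
    return (int(parts[0]), int(parts[1]))

# Branch status as described in the article
EOL_BRANCHES = {(6, 19)}
LTS_BRANCHES = {(6, 6): 2027, (6, 12): 2028, (6, 18): 2028}

def upgrade_advice(release: str) -> str:
    branch = parse_branch(release)
    if branch in EOL_BRANCHES:
        return "end-of-life: upgrade to 7.0 or an LTS kernel"
    if branch in LTS_BRANCHES:
        return f"LTS branch, supported until {LTS_BRANCHES[branch]}"
    return "check kernel.org for this branch's support status"
```

The same table-driven check is what makes LTS branches attractive for servers: the support horizon is a known date rather than "until the next release".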
- Archinstall 4.2 Shifts to Wayland-First Profiles, Leaving X.Org Behind
by George Whittaker
The Arch Linux installer continues evolving alongside the broader Linux desktop ecosystem. With the release of Archinstall 4.2, a notable change has arrived: Wayland is now the default focus for graphical installation profiles, while traditional X.Org-based profiles have been removed or deprioritized.
This move reflects a wider transition happening across Linux, one that is gradually redefining how graphical environments are built and used.
A Turning Point for Archinstall
Archinstall, the official guided installer for Arch Linux, has steadily improved over time to make installation more accessible while still maintaining Arch’s minimalist philosophy.
With version 4.2, the installer now aligns more closely with modern desktop trends by emphasizing Wayland-based environments during setup, instead of offering traditional X.Org configurations as first-class options.
This doesn’t mean X.Org is completely gone from Arch Linux, but it does signal a clear shift in direction.
Why Wayland Is Taking Over
Wayland has been gaining traction for years as the successor to X.Org, offering a more streamlined and secure approach to rendering graphics on Linux.
Compared to X.Org, Wayland is designed to:
- Reduce complexity in the graphics stack
- Improve security by isolating applications
- Deliver smoother rendering and better performance
- Support modern display technologies like high-DPI and variable refresh rates
As the Linux ecosystem evolves, many distributions and desktop environments are prioritizing Wayland as the default display protocol.
What Changed in Archinstall 4.2
With this release, users installing Arch through Archinstall will notice:
- Wayland-based desktop environments and compositors are now the primary options
- X.Org-centric setups are no longer emphasized in guided profiles
- Installation workflows better reflect modern Linux defaults
This simplifies the installation experience for new users, who no longer need to choose between legacy and modern display systems during setup.
What About X.Org?
While Archinstall is moving forward, X.Org itself is not disappearing overnight.
Many applications and workflows still rely on X11, and compatibility is maintained through XWayland, which allows X11 applications to run within Wayland sessions.
For advanced users, Arch still provides full flexibility: Go to Full Article
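One practical consequence of the Wayland/X11 split is that applications and scripts often need to detect which session type they are running under. A common heuristic (a sketch, not part of Archinstall) checks the standard environment variables a session sets:

```python
import os

def session_type(env=None) -> str:
    """Guess the display session type from standard environment
    variables: Wayland compositors set WAYLAND_DISPLAY, while a plain
    X11 session sets only DISPLAY (XWayland sessions set both)."""
    env = os.environ if env is None else env
    if env.get("XDG_SESSION_TYPE") in ("wayland", "x11"):
        return env["XDG_SESSION_TYPE"]  # most reliable when present
    if env.get("WAYLAND_DISPLAY"):
        return "wayland"
    if env.get("DISPLAY"):
        return "x11"
    return "unknown"
```

Note the ordering: because XWayland exports `DISPLAY` inside a Wayland session, checking `WAYLAND_DISPLAY` first (after the explicit `XDG_SESSION_TYPE` hint) avoids misclassifying a Wayland desktop as X11.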
- OpenClaw in 2026: What It Is, Who’s Using It, and Whether Your Business Should Adopt It
by George Whittaker
“probably the single most important release of software, probably ever.”
— Jensen Huang, CEO of NVIDIA
Wow! That’s a bold statement from one of the most influential figures in modern computing.
But is it true? Some people think so. Others think it’s hype. Most are somewhere in between, aware of OpenClaw, but not entirely sure what to make of it. Are people actually using it? Yes. Who’s using it? More than you might expect. Is it experimental, or is it already changing how work gets done? That depends on how it’s being applied. Is it more relevant for businesses or consumers right now? That’s one of the most important, and most misunderstood, questions.
This article breaks that down clearly: what OpenClaw is, how it works, who is using it today, and where it actually creates value.
What makes OpenClaw different isn’t just the technology, it’s where it fits. Most of the AI tools people are familiar with still require a human to take the next step. They assist, but they don’t execute. OpenClaw changes that dynamic by connecting decision-making directly to action. Once you understand that shift, the rest of the discussion, who’s using it, how it’s being deployed, and where it creates value, starts to make a lot more sense.
Top 10 Questions About OpenClaw
What is OpenClaw?
OpenClaw is an open-source AI agent framework that enables large language models like Claude, GPT, and Gemini to execute real-world tasks across software systems, including APIs, files, and workflows.
What does OpenClaw actually do?
OpenClaw functions as an execution layer that allows AI systems to take actions, such as sending emails, updating CRM records, or running scripts, instead of only generating responses.
Do you need to be a developer to use OpenClaw?
No, but technical familiarity helps. Non-developers can use prebuilt workflows, while developers can customize and scale implementations more effectively.
Is OpenClaw more suited for business or consumer use?
OpenClaw is currently more suited for business and technical use cases where structured workflows exist. Consumer use is emerging but remains secondary.
How is OpenClaw different from ChatGPT or Claude?
ChatGPT and Claude generate outputs, while OpenClaw enables those outputs to trigger actions across connected systems.
Who created OpenClaw? Go to Full Article
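The "execution layer" idea the answers describe, model output mapped onto real actions, can be sketched generically. The following is not OpenClaw's actual API (its interfaces are not documented here); it is a hypothetical dispatcher showing how a structured action emitted by a model might be routed to concrete functions:

```python
# Hypothetical tool functions an agent framework might expose; the
# names and signatures are invented for this illustration.
def send_email(to: str, subject: str) -> str:
    return f"email '{subject}' queued for {to}"

def update_crm(record_id: str, field: str, value: str) -> str:
    return f"CRM record {record_id}: {field} set to {value}"

TOOLS = {"send_email": send_email, "update_crm": update_crm}

def execute(action: dict) -> str:
    """Dispatch one structured action, e.g. an LLM's tool-call output,
    to the matching real-world function."""
    name = action.get("tool")
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name!r}")
    return TOOLS[name](**action.get("args", {}))
```

This is the shift the article describes: a chat model alone stops at text, while an execution layer turns a structured reply like `{"tool": "send_email", ...}` into an effect on a connected system, which is also why allow-listing the tool table matters for safety.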
- Linux Kernel Developers Adopt New Fuzzing Tools
by George Whittaker
The Linux kernel development community is stepping up its security game once again. Developers, led by key maintainers like Greg Kroah-Hartman, are actively adopting new fuzzing tools to uncover bugs earlier and improve overall kernel reliability.
This move reflects a broader shift toward automated testing and AI-assisted development, as the kernel continues to grow in complexity and scale.
What Is Fuzzing and Why It Matters
Fuzzing is a software testing technique that feeds random or unexpected inputs into a program to trigger crashes or uncover vulnerabilities.
In the Linux kernel, fuzzing has become one of the most effective ways to detect:
- Memory corruption bugs
- Race conditions
- Privilege escalation flaws
- Edge-case failures in subsystems
Modern fuzzers like Syzkaller have already discovered thousands of kernel bugs over the years, making them a cornerstone of Linux security testing.
New Tools Enter the Scene
Recently, kernel maintainers have begun experimenting with new fuzzing frameworks and tooling, including a project internally referred to as “clanker”, which has already been used to identify multiple issues across different kernel subsystems.
Early testing has uncovered bugs in areas such as:
- SMB/KSMBD networking code
- USB and HID subsystems
- Filesystems like F2FS
- Wireless and device drivers
The speed at which these issues were discovered suggests that these new tools are significantly improving bug detection efficiency.
AI and Smarter Fuzzing Techniques
One of the most interesting developments is the growing role of AI and machine learning in fuzzing.
New research projects like KernelGPT use large language models to:
- Automatically generate system call sequences
- Improve test coverage
- Discover previously hidden execution paths
These techniques can enhance traditional fuzzers by making them smarter about how they explore the kernel’s behavior.
Other advancements include:
- Better crash analysis and deduplication tools (like ECHO)
- Configuration-aware fuzzing to explore deeper kernel states
- Feedback-driven fuzzing loops for improved coverage
Together, these innovations help developers focus on the most meaningful bugs rather than sifting through duplicate reports.
Why This Shift Is Happening Now
The Linux kernel is one of the most complex software projects in existence. With millions of lines of code and contributions from thousands of developers, manually catching every bug is nearly impossible. Go to Full Article
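The core fuzzing loop described above is simple enough to sketch: mutate inputs from a corpus, feed them to the target, and record anything that crashes. A toy, seed-deterministic illustration (the `toy_parser` bug is invented for the demo; real kernel fuzzers like Syzkaller generate system-call programs rather than byte blobs):

```python
import random

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Flip one random bit: the simplest possible mutation strategy."""
    if not data:
        return data
    buf = bytearray(data)
    i = rng.randrange(len(buf))
    buf[i] ^= 1 << rng.randrange(8)
    return bytes(buf)

def toy_parser(data: bytes) -> None:
    # Deliberately buggy target: "crashes" on a leading 0xFF byte
    if data and data[0] == 0xFF:
        raise ValueError("parser crash")

def fuzz(seed: int, iterations: int = 2000) -> list:
    """Run the fuzz loop and collect every crashing input."""
    rng = random.Random(seed)
    corpus = [b"\x7fELF", b"hello", b"\xfe\x00\x01"]
    crashes = []
    for _ in range(iterations):
        sample = mutate(rng.choice(corpus), rng)
        try:
            toy_parser(sample)
        except ValueError:
            crashes.append(sample)
    return crashes
```

What coverage-guided fuzzers add on top of this loop is feedback: inputs that reach new code paths are fed back into the corpus, which is exactly the "feedback-driven" improvement the article mentions.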
- GNOME 50 Reaches Arch Linux: A Leaner, Wayland-Only Future Arrives
by George Whittaker
Arch Linux users are among the first to experience the latest GNOME desktop, as GNOME 50 has begun rolling out through Arch’s repositories. Thanks to Arch’s rolling-release model, new upstream software like GNOME arrives quickly, giving users early access to the newest features and architectural changes.
With GNOME 50, that includes one of the most significant shifts in the desktop’s history.
A Major GNOME Milestone
GNOME 50, officially released in March 2026 under the codename “Tokyo,” represents six months of development and refinement from the GNOME community.
Unlike some previous versions, this release focuses less on dramatic redesigns and more on strengthening the foundation of the desktop, improving performance, modernizing graphics handling, and simplifying long-standing complexities.
For Arch Linux users, that translates into a more streamlined and future-ready desktop environment.
Goodbye X11, Hello Wayland-Only Desktop
The headline change in GNOME 50 is the complete removal of X11 support from GNOME Shell and its window manager, Mutter.
After years of gradual transition:
- X11 sessions were first deprecated
- Then disabled by default
- And now fully removed in GNOME 50
This means GNOME now runs exclusively on Wayland, with legacy X11 applications handled through XWayland compatibility layers.
The result is a simpler, more modern graphics stack that reduces maintenance overhead and improves long-term performance and security.
Improved Graphics and Display Handling
GNOME 50 brings several key improvements to display and graphics performance:
- Variable Refresh Rate (VRR) enabled by default
- Better fractional scaling support
- Improved compatibility with NVIDIA drivers
- Enhanced HDR and color management
These changes aim to deliver smoother animations, more responsive desktops, and better support for modern displays.
For gamers and users with high-refresh monitors, these upgrades are especially noticeable.
Performance and Responsiveness Gains
Beyond graphics, GNOME 50 includes multiple performance optimizations:
- Faster file handling in the Files (Nautilus) app
- Improved thumbnail generation
- Reduced stuttering in animations
- Better resource usage across the desktop
These refinements make the desktop feel more responsive, particularly on systems with demanding workloads or multiple monitors.
New Parental Controls and Accessibility Features
GNOME 50 also expands its focus on usability and accessibility. Go to Full Article