
- Debian DSA-6248-1 Apache2 Critical RCE Privilege Escalation Risks
Multiple vulnerabilities have been discovered in the Apache HTTP server, which may result in remote code execution, privilege escalation, denial of service or information disclosure. For the oldstable distribution (bookworm), these problems have been fixed in version 2.4.67-1~deb12u2.
- Debian 11 OpenJDK Important Denial of Service Risks DLA-4566-1
Several vulnerabilities have been discovered in the OpenJDK Java runtime, which may result in incorrect generation of cryptographic keys, denial of service, information disclosure, XXE attacks or incorrect validation of Kerberos credentials. For Debian 11 bullseye, these problems have been fixed in version
- Debian 11 OpenJDK-17 Denial Of Service Info Disclosure DLA-4565-1
Several vulnerabilities have been discovered in the OpenJDK Java runtime, which may result in incorrect generation of cryptographic keys, denial of service, information disclosure, XXE attacks or incorrect validation of Kerberos credentials. For Debian 11 bullseye, these problems have been fixed in version

- [$] LLM-driven security reports disrupt coordinated disclosure
Predictions that LLM tools would cause a surge in reports of security vulnerabilities have, unquestionably, been borne out. As expected, maintainers are having to wade through more security reports than ever before; in addition, LLM tools are disrupting traditional coordinated-disclosure practices as well. The method of CopyFail's disclosure, in particular, left vendors, projects, and users scrambling. In addition, maintainers are seeing parallel discovery of the same security flaws within the embargo window. Both of these developments mean that coordinated security disclosures may become a thing of the past.
- Incus 7.0 LTS released
Version 7.0 of the Incus container and virtual-machine management system has been released. Notable changes in this release include a low-level backup API, the addition of basic S3 operations directly in Incus to replace the now-unmaintained MinIO project, as well as the removal of support for cgroups v1 and xtables (iptables/ip6tables/ebtables). This is a long-term-support (LTS) release, with support through June 2031.
The first two years will feature bug and security fixes as well as minor usability improvements, delivered through occasional point releases (7.0.x). After that initial two years, Incus 7.0 LTS will move to security-only maintenance for the remainder of its five years of support.
A total of 204 individuals contributed to Incus between the 6.0 LTS and 7.0 LTS releases, with 45 contributing between the 6.23 and 7.0 LTS releases.
- Security updates for Wednesday
Security updates have been issued by AlmaLinux (corosync, dovecot, image-builder, python-tornado, resource-agents, and systemd), Debian (openjdk-11, openjdk-17, and pyjwt), Fedora (pdns, pyOpenSSL, and squid), Slackware (hunspell), SUSE (alloy, avahi, bubblewrap, cmctl, coredns, curl, dpkg, firefox, golang-github-prometheus-prometheus, grafana, libpng12, PackageKit, sed, and xen), and Ubuntu (docker.io-app, nghttp2, python-django, and python-mako).
- [$] Hardware-assisted Arm VMs for s390
A recent patch set from Steffen Eiden and others has set the groundwork for allowing hardware-assisted emulation of Arm CPUs on s390 CPUs. Version two of the posting fixes a handful of smaller problems, but does not differ much. The patches were welcomed by the Arm maintainers, pending some discussion of how the collaboration between the architectures could be structured to prevent maintainability problems on the Arm side. When those details are resolved, the patches could pave the way for transparently running Arm-based virtual machines (VMs) on s390 hosts at native or near-native speeds.
- Security updates for Tuesday
Security updates have been issued by AlmaLinux (kernel, kernel-rt, libcap, LibRaw, openssh, thunderbird, and tigervnc), Debian (libarchive and lxd), Fedora (chromium, insight, nodejs20, rust-sequoia-git, and uriparser), Mageia (kernel, kmod-virtualbox), Oracle (kernel, libcap, thunderbird, and uek-kernel), Red Hat (.NET 10.0, .NET 8.0, .NET 9.0, fence-agents, sudo, and systemd), Slackware (httpd), SUSE (freerdp, hauler, helm, himmelblau, kernel, libspectre, thunderbird, trivy, and xen), and Ubuntu (curl, exim4, and sed).
- The retirement of the PHP license
The PHP project has long shipped under its own license — except for the parts under the Zend Engine License. The PHP project has now announced that the PHP license has been retired, and the PHP code has been relicensed under the three-clause BSD license. See this blog entry for more details. Getting here required more than writing an RFC. The PHP License gives the PHP Group the authority to change it, which meant tracking down each of the original PHP Group members and getting their written consent. Each approved the proposal. Perforce Software, the successor to Zend Technologies, needed to sign off on the Zend Engine side, as well. They provided a formal letter confirming their full authority and support for the change. I hired an attorney to review the proposal and provide advice on any legal questions that might surface during the discussion period. Speaking of which, I allowed for a six-month community discussion period preceding the vote, which passed unanimously. LWN covered the license-change process back in March.
- Alpine Linux systems currently offline
The Alpine Linux account on fosstodon.org reports that all systems hosted at Linode, including its GitLab instance, "are suspended at the moment due to some billing issue". They are working to get it resolved, but in the meantime all of their services appear to be down.
Update: Alpine Linux's servers are back online.
- [$] Bug-monitoring expectations and Fedora GNOME packages
For a number of years, users submitting bug reports against GNOME packages in Fedora have received an auto-reply saying that the reports were not actively monitored; users were encouraged to file bugs with GNOME upstream instead. However, that practice seems to be in conflict with the Fedora Engineering Steering Committee (FESCo) policy that package maintainers "deal with reported bugs in a timely manner". On April 28, FESCo discussed the disconnect between practice and policy; so far, it has only opted to tweak the wording of the automatic response.
- NetHack 5.0.0 released
Version 5.0.0 of the NetHack dungeon-exploration game, a distant relative of Rogue and Hack, has been released. NetHack's code is now compliant with the C99 standard, and the release includes more than 3,100 bug fixes and changes, detailed in doc/fixes5-0-0.txt (may contain game spoilers). Saved games from previous versions will not work with NetHack 5.0.0.
- Security updates for Monday
Security updates have been issued by AlmaLinux (kernel, libcap, libtiff, sudo, and thunderbird), Debian (dovecot, imagemagick, incus, kernel, libexif, linux-6.1, openjdk-25, pyasn1, python-aiohttp, and thunderbird), Fedora (chromium, firefox, GitPython, glibc, insight, krb5, nano, nss, openssh, openvpn, perl-CryptX, python3.14, rust-openssl, rust-openssl-sys, rust-sequoia-git, and xen), Oracle (dtrace, fence-agents, grafana-pcp, libcap, libtiff, sudo, and xorg-x11-server-Xwayland), Red Hat (buildah, fence-agents, firefox, java-11-openjdk with Extended Lifecycle Support, LibRaw, nodejs24, nodejs:24, openssh, python-pyasn1, resource-agents, thunderbird, tigervnc, xorg-x11-server, and xorg-x11-server-Xwayland), Slackware (mozilla), and SUSE (avahi, curl, freeipmi, freerdp, google-guest-agent, google-osconfig-agent, gvim, helm, himmelblau, java-1_8_0-openjdk, kernel, krb5-appl-clients, libsodium, libssh, libtiff-devel-32bit, ntfs-3g_ntfsprogs, openCryptoki, openexr, ovmf, PackageKit, python-jwcrypto, python-Mako, python-PyNaCl, python311, python311-pypdf, sed, trivy, and vim).
- Kernel prepatch 7.1-rc2
The second 7.1 kernel prepatch is out for testing. "It's not small, and while it's a bit early to say for sure, I do suspect we're seeing the same continued pattern of more patches than usual - probably due to AI tooling - that we saw in 7.0."
- Eden: NHS goes to war against open source
Terence Eden reports that the UK's National Health Service (NHS) is preparing to close almost all of its open-source repositories as a response to LLM tools, such as Anthropic's Mythos, becoming more sophisticated at finding security vulnerabilities. He does not, to put it mildly, agree with the decision:
The majority of code repos published by the NHS are not meaningfully affected by any advance in security scanning. They're mostly data sets, internal tools, guidance, research tools, front-end design and the like. There is nothing in them which could realistically lead to a security incident.
When I was working at NHSX during the pandemic, we were so confident of the safety and necessity of open source, we made sure the Covid Contact Tracing app was open sourced the minute it was available to the public. That was a nationally mandated app, installed on millions of phones, subject to intense scrutiny from hostile powers - and yet, despite publishing the code, architecture and documentation, the open source code caused zero security incidents.
Furthermore, this new guidance is in direct contradiction to the UK's Tech Code of Practice point 3 "Be open and use open source", which insists on code being open.
- [$] Version-controlled databases using Prolly trees
Modern databases and filesystems make pervasive use of B-trees, which are tree structures optimized for storing sorted lists of keys and values on block devices. Dolt is an Apache 2.0-licensed project that makes clever use of a variant of a B-tree to support efficient version control for an entire database. The data structure it uses could well be of interest to other projects.
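Dolt's actual node-splitting code is more involved, but the core idea behind the Prolly-tree variant of a B-tree can be sketched: chunk boundaries are chosen by hashing the content itself, so the same sorted data always splits at the same points regardless of insertion history, and unchanged chunks can be shared between versions. A hypothetical Python illustration (the `boundary` threshold and chunk size here are made up for the example):

```python
import hashlib

def boundary(key: bytes, target_size: int = 4) -> bool:
    """A key ends a chunk when its hash falls in a 1/target_size window.

    Chunks therefore average about target_size entries, and the split
    points depend only on the keys themselves, never on insert order.
    """
    h = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return h % target_size == 0

def chunk(keys):
    """Group sorted keys into chunks at content-defined boundaries."""
    chunks, current = [], []
    for k in keys:
        current.append(k)
        if boundary(k):
            chunks.append(current)
            current = []
    if current:
        chunks.append(current)
    return chunks

a = chunk([f"key{i}".encode() for i in range(20)])
b = chunk([f"key{i}".encode() for i in range(25)])
# Every complete chunk of the shorter list reappears unchanged in the
# longer one, which is what makes structural sharing between versions work.
print(a[:-1] == b[:len(a) - 1])  # True
```

Because unchanged chunks hash to the same value, two versions of a table can share most of their storage, and diffing them reduces to comparing chunk hashes.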
- Security updates for Friday
Security updates have been issued by AlmaLinux (fence-agents), Debian (chromium, dovecot, and kernel), Fedora (chromium, dotnet10.0, dotnet8.0, dotnet9.0, emacs, glow, jfrog-cli, openbao, pyp2spec, python3.6, rust-rustls-webpki, vhs, and xen), Oracle (grafana, grafana-pcp, PackageKit, sudo, vim, and xorg-x11-server), Red Hat (rhc), SUSE (avahi, bouncycastle, chromium, container-suseconnect, firewalld, gdk-pixbuf, grafana, java-25-openjdk, kernel, libixml11, libmozjs-140-0, libpng12-0, libsodium, libssh, mariadb, Mesa, ntfs-3g_ntfsprogs, openCryptoki, openexr, packagekit, prometheus-postgres_exporter, python-jwcrypto, python-mako, python-Pygments, python-pynacl, python311, python311-pyOpenSSL, python315, radare2, sed, and vim), and Ubuntu (kmod and zulucrypt).
- [$] Restartable sequences, TCMalloc, and Hyrum's Law
Hyrum's Law states that any observable behavior of a system will eventually be depended upon by somebody. The kernel community is currently contending with a clear demonstration of that principle. The recent work to address some restartable-sequences performance problems in the 6.19 release maintained the documented API in all respects, but that was not enough; Google's TCMalloc library, as it turns out, violates the documented API, prevents other code from using restartable features, and breaks with 6.19. But the kernel's no-regressions rule is forcing developers to find a way to accommodate TCMalloc's behavior.

- Engicam expands MicroGEA lineup with 25 x 25 mm NXP i.MX 93 module
Engicam has expanded its MicroGEA family with the new MicroGEA MX93, a compact system-on-module based on the NXP i.MX 93 processor. The 25 × 25 mm module combines dual Arm Cortex-A55 cores, LPDDR4X memory, onboard eMMC storage, and industrial temperature support. The launch follows earlier MicroGEA modules based on STM32MP1 processors, continuing the company’s focus […]
- Fedora Yet To Decide On x86_64-v3 Packages For Fedora Linux 45
Last month a Fedora Linux change proposal was shared proposing that Fedora 45 be built with x86_64-v3 packages to complement the generic x86_64 (v1) packages currently being compiled. This has the possibility of providing greater performance out of packaged Fedora software but comes with the cost of greater burdens on web mirrors, QA / testing, and related infrastructure impact. The Fedora Engineering and Steering Committee "FESCo" decided today to wait on coming to a decision over this Fedora 45 change proposal...
- NovaCustom Unveils PrivacyGuard and SecurityTitan Lineup
I’ve written about NovaCustom hardware many times on this blog for three main reasons: its extensive customization options, its open source, privacy-oriented firmware, and its strong focus on sustainability. While NovaCustom still offers all three, it has now introduced a new line of preconfigured models, PrivacyGuard and SecurityTitan, offering a straightforward option with faster delivery for those who prefer to skip the detailed configuration process. If that sounds like you, and privacy and security are high priorities, you might be wondering what these new NovaCustom models are and how they can take your setup to the next level.
- Attackers are cashing in on fresh 'CopyFail' Linux flaw
Researchers dropped a reliable root exploit and it didn't sit idle for long. CISA is warning that a newly-disclosed Linux kernel bug dubbed "CopyFail" is already being exploited, just days after researchers dropped a working root-level exploit.
- Bug-monitoring expectations and Fedora GNOME packages
Users submitting bug reports about GNOME packages to Fedora have received an auto-reply saying that the reports were not actively monitored. The practice seems to go against Fedora policy; FESCo has decided the auto-reply has to change, but has not decided about actual monitoring.
- Shuttle XPC cube SB860R8 targets workstation workloads with Core Ultra 200 support
Shuttle’s new XPC cube SB860R8 is a 14-liter barebone system supporting Intel Core Ultra 200 series processors. Key features include up to 192 GB DDR5 memory, four 3.5-inch drive bays, PCIe Gen5 expansion, dual 2.5 GbE, and multiple display outputs including HDMI 2.1 with 8K support. The system is built around the LGA1851 socket for […]
- What is /dev/zero in Linux and its Uses
In this article, you will learn about the special file /dev/zero and its various use cases, such as creating a swap file, creating a dummy file for testing, and zero-filling a drive for security reasons.
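All of those use cases rest on one property: every read from /dev/zero returns NUL bytes, without ever blocking or hitting end-of-file. A quick check (assumes a Linux system):

```python
# /dev/zero supplies an endless stream of zero bytes; a read of any
# size succeeds immediately and returns only 0x00 bytes.
with open("/dev/zero", "rb") as f:
    data = f.read(16)

print(data == bytes(16))  # True: sixteen NUL bytes
```

The same property is what makes `dd if=/dev/zero of=...` suitable for pre-allocating swap files or overwriting a disk with zeros.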

- Morgan Stanley Undercuts Rivals On Pricing In Crypto Trading Debut
Morgan Stanley is adding crypto trading to E*Trade, with a pilot now underway and a broader rollout planned for the platform's 8.6 million customers later this year. The bank is reportedly undercutting rivals with a 50-basis-point trading fee as it bets traditional finance and DeFi will converge. "By contrast, Robinhood Markets' (HOOD) fees start at 95 bps, Coinbase Global's (COIN) begins at 60 bps, and Charles Schwab (SCHW) will charge 75 bps," notes Seeking Alpha. Morgan Stanley's head of wealth management, Jed Finn, told Bloomberg: "This is much bigger than trading crypto at a cheaper rate. In a way, the strategy is disintermediating the disintermediators."
Read more of this story at Slashdot.
- Claude Managed Agents Can Engage In a 'Dreaming' Process To Preserve Memories
An anonymous reader quotes a report from Ars Technica: At its Code with Claude developers' conference, Anthropic has introduced what it calls "dreaming" to Claude Managed Agents. Dreaming, in this case, is a process of going over recent events and identifying specific things that are worth storing in "memory" to inform future tasks and interactions. Dreaming is a feature that is currently in research preview and limited to Managed Agents on the Claude Platform. Managed Agents are a higher-level alternative to building directly on the Messages API that Anthropic describes as a "pre-built, configurable agent harness that runs in managed infrastructure." It's intended for situations where you want multiple agents working on a task or project to some end point over several minutes or hours. Anthropic describes dreaming as a scheduled process, in which sessions and memory stores are reviewed, and specific memories are curated. This is important because context windows are limited for LLMs, and important information can be lost over lengthy projects. On the chat side of things, many models use a process called compaction, whereby lengthy conversations are periodically analyzed, and the models attempt to remove irrelevant information from the context window while keeping what's actually important for the ongoing conversation, project, or task. However, that process, as I described it, is usually limited to a specific conversation with a single agent. "Dreaming" is a periodically recurring process in which past sessions and memory stores can be analyzed across agents, and important patterns are identified and saved to memory for the future. Users will be able to choose between an automatic process, or reviewing changes to memory directly.
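The compaction process described above can be sketched in miniature. Everything below (the `Message` type, the `compact` function, the summary placeholder) is illustrative only, not Anthropic's API: the idea is simply that recent turns stay verbatim while older turns collapse into a compact summary entry.

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str
    text: str

def compact(history, keep_recent=4):
    """Collapse all but the last `keep_recent` messages into one summary.

    A real system would summarize the older turns with a model; here we
    just record how many messages were folded away.
    """
    if len(history) <= keep_recent:
        return list(history)
    older, recent = history[:-keep_recent], history[-keep_recent:]
    summary = Message("system", f"[summary of {len(older)} earlier messages]")
    return [summary] + recent

history = [Message("user", f"turn {i}") for i in range(10)]
compacted = compact(history)
print(len(compacted))  # 5: one summary entry plus the 4 most recent turns
```

"Dreaming", as described in the article, differs in that the review runs across sessions and agents on a schedule, writing durable memories rather than trimming a single conversation's context window.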
Read more of this story at Slashdot.
- ReactOS Unifies Installation Media, Introduces GUI Installer and New ATA Driver
jeditobe writes: Developers of ReactOS told Phoronix that the project has introduced a unified BootCD, replacing its previously separate installation media and LiveCD images. The new image combines the traditional text-mode installer with a LiveCD mode in a single medium. Within this unified BootCD, the updated LiveCD mode now includes an option to launch a first-stage GUI installer. The graphical interface is intended to make installation more approachable for new users compared to the long-standing text-based setup process. In a separate development, the project has also merged a new ATA storage driver that has been in progress since early 2024. The plug-and-play aware storage stack supports SATA, PATA, ATAPI, AHCI, and even SCSI devices, potentially expanding the range of hardware on which ReactOS can successfully boot. Following recent improvements to graphics driver support, the project continues to make incremental progress across core subsystems, though its long development timeline remains a point of discussion. Will these usability and hardware compatibility improvements be enough to broaden ReactOS adoption beyond its current niche? Note that none of these new features are present in version 0.4.15; they are available for testing in the latest nightly builds.
Read more of this story at Slashdot.
- Zuckerberg 'Personally Authorized and Encouraged' Meta's Copyright Infringement
Five major publishers and author Scott Turow have sued Meta and Mark Zuckerberg, alleging that Zuckerberg "personally authorized and actively encouraged" massive copyright infringement by using pirated books, journal articles, and web-scraped material to train Meta's Llama AI systems. Meta denies wrongdoing and says it will fight the case, arguing that courts have recognized AI training on copyrighted material as potentially fair use. Variety reports: "In their effort to win the AI 'arms race' and build a functional generative AI model, Defendants Meta and Zuckerberg followed their well-known motto: 'move fast and break things,'" the plaintiffs say in their lawsuit. "They first illegally torrented millions of copyrighted books and journal articles from notorious pirate sites and downloaded unauthorized web scrapes of virtually the entire internet. They then copied those stolen fruits many times over to train Meta's multibillion-dollar generative AI system called Llama. In doing so, Defendants engaged in one of the most massive infringements of copyrighted materials in history." The suit was filed Tuesday (May 5) in the U.S. District Court for the Southern District of New York by five publishers (Hachette, Macmillan, McGraw Hill, Elsevier and Cengage) and Turow individually. The proposed class-action suit seeks unspecified monetary damages for the alleged copyright infringement. A copy of the lawsuit is available at this link (PDF). [...] the latest lawsuit alleges that Meta and Zuckerberg deliberately circumvented copyright-protection mechanisms -- and had considered paying to license the works before abandoning that strategy at "Zuckerberg's personal instruction." The suit essentially argues that the conduct described falls outside protections afforded by fair-use provisions of the U.S. copyright code.
Read more of this story at Slashdot.
- Silicon Valley Bets $200 Million On AI Data Centers Floating In the Ocean
An anonymous reader quotes a report from Ars Technica: Silicon Valley investors such as Palantir co-founder Peter Thiel have bet hundreds of millions of dollars on deploying AI data centers powered by waves in the middle of the world's oceans -- a move that coincides with tech companies facing mounting challenges in building AI data center projects on land. The latest investment round of $140 million is intended to help the company Panthalassa complete a pilot manufacturing facility near Portland, Oregon, and speed up deployments of wave-riding "nodes" designed to generate electrical power, according to a May 4 press release. Instead of sending renewable energy to a land-based data center, the floating nodes would directly power onboard AI chips and transmit inference tokens representing the AI models' outputs to customers worldwide via satellite link. Each node resembles a huge steel sphere bobbing on the water with a tube-like structure extending vertically down beneath the surface. The wave motions drive water upward through the tube into a pressurized reservoir, where it can be released to spin a turbine generator that produces renewable energy for the AI chips on board. Panthalassa claims the node's AI chips would also get cooled using the surrounding water, which could offer another advantage over traditional data centers. "Ocean-based compute might offer a massive cooling advantage because the ambient temperature is so low," Lee said. "Land-based data centers use a lot of electricity and fresh water for cooling." The newest node prototype, called Ocean-3, is scheduled for testing in the northern Pacific Ocean later in 2026. The latest version reaches about 85 meters in length and would stand nearly as tall as London's Big Ben or New York City's Flatiron Building, according to the Financial Times. 
Panthalassa has already tested several earlier prototypes of the wave energy converter technology, including the Ocean-1 in 2021 and the Ocean-2 that underwent a three-week sea trial off the coast of Washington state in February 2024. The company's CEO and co-founder, Garth Sheldon-Coulson, said in a CBS interview that he hopes to eventually deploy thousands of the nodes.
Read more of this story at Slashdot.
- Microsoft Gives Up On Xbox Copilot AI
Microsoft is winding down Xbox Copilot on mobile and ending development of Copilot on console, reversing plans to bring the gaming-focused AI assistant to current-generation Xbox consoles this year. "The move follows [new Xbox CEO Asha Sharma's] reorganization of the Xbox platform team earlier on Tuesday, which added executives from Microsoft's CoreAI team -- where Sharma worked before taking over Xbox -- to the Xbox side of the company," reports The Verge. Sharma said in a post on X: Xbox needs to move faster, deepen our connection with the community, and address friction for both players and developers. Today, we promoted leaders who helped build Xbox, while also bringing in new voices to help push us forward. This balance is important as we get the business back on track. As part of this shift, you'll see us begin to retire features that don't align with where we're headed. We will begin winding down Copilot on mobile and will stop development of Copilot on console. Since taking over for former Microsoft Gaming CEO Phil Spencer in February, Sharma has scrapped the Microsoft Gaming brand and cut the price of Xbox Game Pass.
Read more of this story at Slashdot.
- White House App Is a Terrifying Security Mess
New submitter spazmonkey writes: From a hidden GPS tracker polling your location every 4.5 minutes to JavaScript loaded from a random GitHub account, no SSL certificate pinning, and an in-app browser that silently strips cookie consent dialogs and paywalls from every page you visit, the new White House app seems to have a little bit of everything. A security researcher pulled the APK apart to discover the cybersecurity vulnerabilities. "The app is a React Native build using Expo SDK 54, with WordPress powering the backend through a custom REST API," reports Android Headlines. "That's pretty normal, as nearly 42% of all websites on the internet are powered by WordPress. But that's just the start; now the nightmare begins..." From the report: To start, the app has a full GPS tracking pipeline compiled in. Essentially, it's set to poll your location every 4.5 minutes in the foreground, and 9.5 minutes in the background. It's syncing latitude, longitude, accuracy, and timestamp data to OneSignal's servers. These location permissions aren't declared in the AndroidManifest, but they are hardcoded as runtime requests in the OneSignal SDK. Some have noted that the tracking only kicks in if the developer enables it server-side and the user grants permission, but it is there, ready to go. And it gets even stranger. Apparently, the app is loading JavaScript from a random person's GitHub site for YouTube embeds. Yes, you read that right, it's just loading JavaScript from a random GitHub site. So if that account ever gets compromised, arbitrary code could run inside the app's WebView. There's also no SSL certificate pinning, meaning that traffic can potentially be intercepted on compromised networks like sketchy public WiFi or corporate proxies. The app also injects JavaScript and CSS into every page you visit in the in-app browser. This strips away cookie consent dialogs, GDPR banners, login walls, and paywalls. 
There are also leftover dev artifacts in the production build, including a localhost URL to the Metro bundler.
Read more of this story at Slashdot.
- CO2 Levels In the Atmosphere Hit 'Depressing' New Record
Atmospheric carbon dioxide hit a new record in April, averaging about 431 parts per million at NOAA's Mauna Loa Observatory. That's up from under 320 ppm when the site began measurements in 1958. Scientific American reports: Greenhouse gases, such as carbon dioxide, are measured as a proportion of the total atmosphere. The numbers are presented as the number of molecules of a particular gas out of a million total molecules, or ppm. Climate scientist Zachary Labe of Climate Central, a nonprofit that researches climate change, says the new record is "depressing" but not unexpected. "It's just another sign that carbon dioxide continues to increase in our atmosphere as our planet continues to warm," he says. "For many climate scientists, this is just 'here it is again, another record in the wrong direction.'" Labe explains that the amount of CO2 in the atmosphere tends to peak in April each year as decaying plants release greenhouse gases after winter. Some of that CO2 gets reabsorbed by plants as they grow during the warmer months. But NOAA's data show a worrying trend, with the average monthly amount of CO2 steadily increasing. [...] Although the amount of CO2 in the atmosphere has continued to rise, there was a reduction in U.S. emissions in 2023 and 2024. That trend, however, was reversed in 2025, at least partially because of the increased electricity demand from artificial intelligence data centers. Still, Labe says there are reasons for optimism as the use of renewable energy sources such as solar and wind expands.
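The ppm definition quoted above is just a ratio of molecule counts; as a quick sanity check of the arithmetic:

```python
# 431 ppm means 431 CO2 molecules per million air molecules.
co2_ppm = 431
fraction = co2_ppm / 1_000_000
percent = fraction * 100
print(f"{percent:.4f}")  # 0.0431 -> CO2 is roughly 0.04% of the atmosphere
```

The 1958 baseline of just under 320 ppm works out the same way to about 0.032%, which is what makes the climb to 431 ppm a roughly one-third increase over the measurement record.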
Read more of this story at Slashdot.
- Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla
An anonymous reader quotes a report from CNBC: OpenAI President Greg Brockman concluded his testimony on Tuesday, where he largely rebutted Elon Musk's account of the early years of the startup and negotiations that occurred at the company. Brockman testified that he never made any commitments to Musk about the company's corporate structure, and he never heard anyone else make them. He emphasized that OpenAI is still governed by a nonprofit. "This entity remains a nonprofit," Brockman said, referring to the OpenAI foundation. "It is the best-resourced nonprofit in the world." [...] Brockman, who spoke from the witness stand in federal court in Oakland, California, over the course of two days, also revealed that Musk had enlisted several OpenAI employees to do months of free work for him at Tesla, Musk's electric vehicle company. That work mainly included efforts to overhaul the company's approach to developing self-driving technology as part of the Autopilot team there in 2017. During his two days on the stand, Brockman answered questions about his personal financial ambitions, his understanding of OpenAI's structure and Musk's involvement at the company, which they co-founded with other executives in 2015. In Musk's testimony last week, the Tesla and SpaceX CEO said that the time, money and resources he poured into OpenAI had been integral to the company's success. He repeatedly said that he helped recruit the company's top talent. Brockman said Tuesday that while Musk was helpful in convincing some employees to take the leap to join OpenAI, he was a polarizing figure for others. "Elon had a reputation of being an extremely hard driver," Brockman said. He added that "certain candidates were very attracted" by Musk's involvement at OpenAI, and that "certain candidates were very turned off." Musk testified last week that a former OpenAI researcher named Andrej Karpathy joined Tesla, but only after he had planned to leave the startup already. 
Brockman said that Musk, after he hired Karpathy, approached him with "an apology and a confession" about the hire, and that neither Musk nor Karpathy had told him the researcher planned to leave OpenAI before that. Musk was generally not very available for meetings and conversations, Brockman said, so he relied on employees, including Sam Teller and former OpenAI board member Shivon Zilis, as proxies. Brockman testified that open sourcing OpenAI's technology was "not a topic of conversation" during Musk's time with the nonprofit, despite Musk's claims that it was supposed to be central to the organization. He also described tense 2017 negotiations over a possible for-profit arm, saying Musk became angry when equity stakes were discussed. "He said Musk declined the proposal during an in-person meeting, then tore a painting of a Tesla Model 3 car off the wall, and began storming out of the room," reports CNBC. He also demanded to know when the cofounders would leave the company. Brockman further said Musk wanted control of OpenAI because he disliked situations where he lacked control, citing Zip2 and SolarCity as examples Musk had raised. He also testified that Musk partly wanted control to help fund his broader SpaceX ambition of building a "city on Mars." CNBC notes the trial will resume at 8:30 a.m. PT on Wednesday, with Shivon Zilis expected to testify. She is the mother of four of Musk's children and a former OpenAI board member. Recap: OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five); Musk Concludes Testimony At OpenAI Trial (Day Four); Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three); Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two); Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
Read more of this story at Slashdot.
- Apple Agrees To Pay iPhone Owners $250 Million For Not Delivering AI Siri
Apple has agreed to a proposed $250 million settlement over claims that it misled iPhone buyers about the availability of Apple Intelligence and its upgraded Siri features. The settlement would cover U.S. buyers of the iPhone 16 lineup and iPhone 15 Pro models between June 10, 2024, and March 29, 2025. The Verge reports: The settlement will resolve a 2025 lawsuit, alleging Apple's advertisements created a "clear and reasonable consumer expectation" that Apple Intelligence features would be available with the launch of the iPhone 16. The lawsuit claimed Apple's products "offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance." Apple brought certain AI-powered features to the iPhone 16 weeks after its release, and delayed the launch of its more personalized Siri, which is now expected to arrive later this year. Last April, the National Advertising Division recommended that Apple "discontinue or modify" its "available now" claim for Apple Intelligence. Apple also pulled an iPhone 16 ad showing actor Bella Ramsey using the AI-upgraded Siri.
Read more of this story at Slashdot.
- Coinbase Lays Off Nearly 700 Workers In 'AI-Native' Restructuring
Coinbase is laying off about 700 workers, or 14% of its workforce, as CEO Brian Armstrong says the company is restructuring to become "lean, fast, and AI-native." Engadget reports: Armstrong claimed he'd seen engineers "use AI to ship in days what used to take a team weeks" and that non-technical teams in the company are "shipping production code," while Coinbase is automating many of its workflows. "All of this has led us to an inflection point, not just for Coinbase, but for every company," Armstrong wrote. "The biggest risk now is not taking action. We are adjusting early and deliberately to rebuild Coinbase to be lean, fast and AI-native. We need to return to the speed and focus of our startup founding, with AI at our core." An AI-driven restructuring is only one half of the equation for Coinbase, though. Armstrong wrote that while the company "is well-capitalized, has diversified revenue streams and is well-positioned to weather any storm," the crypto market is down. As such, Coinbase is attempting to become leaner and faster ahead of the next crypto cycle. The company is eliminating some management layers and organizing the business around "AI-native talent who can manage fleets of agents to drive outsized impact," Armstrong wrote. "We'll also be experimenting with reduced pod sizes, including 'one person teams' with engineers, designers and product managers all in one role." That sure sounds like an attempt to get workers to take on more responsibilities.
Read more of this story at Slashdot.
- Google DeepMind Workers Vote To Unionize Over Military AI Deals
An anonymous reader quotes a report from Wired: Employees at Google DeepMind in London have voted to unionize as part of a bid to block the AI lab from providing its technology to the US and Israeli militaries. In a letter addressed to Google's managing director for the UK and Ireland, Debbie Weinstein, the workers asked the company to recognize the Communication Workers Union and Unite the Union as joint representatives for DeepMind employees. "Fundamentally, the push for unionization is about holding Google to its own ethical standards on AI, how they monetize it, what the products do, and who they work with," John Chadfield, national officer for technology at the CWU, tells WIRED. "Through the process of unionization, workers are collectively in a much stronger place to put [demands] to an increasingly deaf management." [...] The DeepMind employee tells WIRED that if the staff succeeds in unionizing in the UK, they will likely demand that Google pulls out of its long-standing contract with the Israeli military, and seek greater transparency over how its AI products will be used, and some sort of assurance relating to layoffs made possible by automation. If Google does not engage, the letter states, the employees will ask an arbitration committee to compel the company to recognize the unions. Since the turn of the year, both Anthropic and OpenAI have announced large-scale expansions of their operations in London. CWU hopes the unionization effort at DeepMind will spur workers at those labs into similar action. "These conversations are happening," claims Chadfield. "The workers at other frontier labs have seen what Google DeepMind workers have done. They've come to us asking for help as well." The unionization push began in February 2025 after Alphabet removed a pledge from its AI ethics guidelines that had barred uses such as weapons development and surveillance. 
"A lot of people here bought into the Google DeepMind tagline 'to build AI responsibly to benefit humanity,'" the DeepMind employee told WIRED. "The direction of travel is to further militarization of the AI models we're building here."
Read more of this story at Slashdot.
- Moving To Mainframe Can Be Cheaper Than Sticking With VMware
Gartner says some VMware customers may find it cheaper to move certain Linux VM workloads to IBM mainframes than to adopt Broadcom's new VMware licensing, especially for fleets of hundreds of Linux VMs and mission-critical apps needing long-term stability. The Register reports: Speaking to The Register to discuss the analyst firm's mid-April publication, "The State of the IBM Mainframe in 2026," [Gartner Vice President Analyst Alessandro Galimberti] said some buyers in many fields are comparing mainframes to modern environments and deciding Big Blue's big iron comes out ahead. "I can build a multi-region cloud application, but things like data synchronization and high availability are things I need to build into application logic," he said. "The mainframe has that in the platform, which shields developers from complexity." He also thinks mainframes are ideally suited to workloads that need many years of transactional consistency and backward-compatibility. That said, Galimberti doesn't recommend the mainframe for all applications. He said mission-critical applications that are unlikely to change much for a decade are best-suited to the machines, as are Linux applications because the open source OS runs on IBM's hardware. IBM also offers the z/VM hypervisor, which he says can make Linux "even better and more enterprise-ready." Which is why Galimberti thinks IBM's ecosystem is attractive to VMware users, especially those who operate a fleet of 500 to 700 Linux VMs. [...] Committing to mainframes therefore means planning "to spend time negotiating price and renewal protections, rather than prioritizing the business value these solutions can deliver." Another downside is that mainframes pose clear lock-in risk, so users may hold back on useful customizations out of fear they make it harder to extricate themselves from the platform. Access to skills remains an issue, too, as kids these days mostly don't contemplate a career working with big iron. 
Galimberti sees more service providers investing in their mainframe programs, which might help. So does the availability of Linux.
Read more of this story at Slashdot.
- Kids Bypass Age Verification With Fake Moustaches
A new Internet Matters survey suggests the UK's Online Safety Act age checks are easy for many children to bypass. Reported workarounds include fake birthdays, borrowed IDs, video game characters, and even drawing on a fake mustache. The Register reports: The group surveyed over 1,000 UK children and their parents, and while it did report some positive effects from changes made under the OSA, many children saw age verification as an easy-to-bypass hurdle rather than something that kept them genuinely safe. A full 46 percent of children even said that age checks were easy to bypass, while just 17 percent said that they were difficult to fool. The methods kids use to fool age gates vary, but most are pretty simple: There's the classic use of a video game character to fool video selfie systems, while in other instances, children reported just entering a fake birthday or using someone else's ID card when that was required. The report even cites cases of children drawing a mustache on their faces to fool age detection filters. Seriously. While nearly half of UK kids say it's easy to bypass online age checks (and another 17 percent say it's neither hard nor easy), only 32 percent say they've actually bypassed them, according to Internet Matters. Like scoring some booze from "cool" parents, keeping age-gated content out of the hands of kids under the OSA is only as effective as parents let it be, and a quarter of them enable their kids' online delinquency. More specifically, Internet Matters found that a full 17 percent of parents admitted to actively helping their kids evade age checks, while an additional 9 percent simply turned a blind eye to it.
Read more of this story at Slashdot.
- US Government Warns of Severe CopyFail Bug Affecting Major Versions of Linux
An anonymous reader quotes a report from TechCrunch: A severe security vulnerability affecting almost every version of the Linux operating system has caught defenders off-guard and scrambling to patch after security researchers publicly released exploit code that allows attackers to take complete control of vulnerable systems. The U.S. government says the bug, dubbed "CopyFail," is now being exploited in the wild, meaning it's being actively used in malicious hacking campaigns. [...] Given the risk to the federal enterprise network, U.S. cybersecurity agency CISA has ordered all civilian federal agencies to patch any affected systems by May 15.
Read more of this story at Slashdot.

- Security: Why Linux Is Better Than Windows Or Mac OS
Linux is a free and open source operating system first released in 1991 by Linus Torvalds. Since its release it has built a large and widespread user base worldwide. Linux users swear by the reliability and freedom that this operating system offers, especially when compared to its counterparts, Windows and …
- Essential Software That Are Not Available On Linux OS
An operating system is essentially the most important component of a computer. It manages the different hardware and software components of a computer in the most effective way. There are different types of operating systems, and each comes with its own set of programs and software. You cannot expect a Linux program to have all …
- Things You Never Knew About Your Operating System
The advent of computers has brought about a revolution in our daily lives. From computers so huge they filled an entire room, we have come a very long way to desktops and even palmtops. These machines have become our virtual lockers, and a life without these networked machines has become unimaginable. Sending mail, …
- How To Fully Optimize Your Operating System
Computers and systems are tricky and complicated. If you lack a thorough, or even basic, knowledge of computers, you will often find yourself in a bind. You must understand that something as complicated as a computer requires constant care and constant cleaning up of junk files. Unless you put in the time to configure …
- The Top Problems With Major Operating Systems
There is no system that will never give you any problems. Even if your operating system is easy to understand, there will be times when certain problems arise. Most of these problems are easy to handle and easy to get rid of. But you must be …
- 8 Benefits Of Linux OS
Linux is a small but fast-growing operating system. Strictly speaking, however, we can't term it software yet. As discussed in the article about what a Linux OS can do, Linux is a kernel. Kernels underpin software and programs: they are used by the computer and can be combined with various third-party software …
- Things Linux OS Can Do That Other OS Cant
What Is Linux OS? Linux, similar to Unix, is an operating system which can be used for various computers, handheld devices, embedded devices, etc. The reason why Linux-based operating systems are preferred by many is that they are easy to use and re-use. A Linux-based operating system is technically not an operating system. Operating …
- Packagekit Interview
PackageKit aims to simplify the management of applications on Linux and GNU systems. The main objective is to remove the pain it takes to maintain a system. In an interview, Richard Hughes, the developer of PackageKit, said that he aims to make Linux systems just as powerful as Windows or …
- What’s New in Ubuntu?
What Is Ubuntu? Ubuntu is an open source operating system for Linux-based computers. The software is marketed by Canonical Ltd. and the Ubuntu community. Ubuntu was first released in late October 2004. The Ubuntu program uses the Java, Python, C, C++ and C# programming languages. What Is New? Version 17.04 is now available here …
- Ext3 Reiserfs Xfs In Windows With Regards To Colinux
The problem with Windows is that the system has various limitations, and there is only so much you can do with it. You can access Ext3, ReiserFS, and XFS filesystems by using the coLinux tool. Download the tool from the official site or from the SourceForge site. Edit the connection to "TAP Win32 Adapter …

- The text mode lie: why modern TUIs are a nightmare for accessibility
There is a persistent misconception among sighted developers: if an application runs in a terminal, it is inherently accessible. The logic assumes that because there are no graphics, no complex DOM, and no WebGL canvases, the content is just raw ASCII text that a screen reader can easily parse. The reality is different. Most modern Text User Interfaces (TUIs) are often more hostile to accessibility than poorly coded graphical interfaces. The very tools designed to improve the Developer Experience (DX) in the terminal—frameworks like Ink (JS/React), Bubble Tea (Go), or tcell—are actively destroying the experience for blind users. ↫ Casey Reeves The core reason should be obvious: the command-line interface, at its core, is just a stream of data with the newest data at the bottom, linearly going back in time as you go up. Any screen reader can deal with this fairly easily, and while I personally have no need for such a tool, I've heard from those that do that kernel-level screen readers are quite good at what they do. TUIs, or text-based user interfaces, made with modern frameworks are actually very different: they're a 2D grid where every character cell is treated like a pixel, which abandons the temporal flow for a spatial layout. It should become immediately obvious that screen readers won't really know what to do with this, and Reeves gives countless examples, but the short version is this: the cursor jumps all over the place with every screen update, which makes screen readers go nuts. Various older TUIs, made in a time well before these modern TUI frameworks came about, were designed in a much more terminal-friendly way, or give you options to hide the cursor to solve the problem that way. Irssi, for example, uses VT100 scrolling regions instead of redrawing the whole screen every time something changes. I had never really stopped to think about TUIs and screen readers, as is common among us sighted people.
The problems Reeves describes seem to stem not so much from TUIs being inherently inaccessible, but from modern frameworks not actually making use of the terminal's core feature set. I really hope Reeves' article shines a light on this problem, and that the people developing these modern TUIs start taking accessibility more seriously.
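The scrolling-region technique the article credits Irssi with can be sketched in a few lines. This is an illustrative sketch of the VT100/DECSTBM escape sequences, not Irssi's actual code: by confining scrolling to a sub-region of the screen, new log lines scroll within that region while a status bar stays put, so the terminal's own text flow (and a screen reader following it) stays stable.

```python
import sys

CSI = "\x1b["  # Control Sequence Introducer

def set_scroll_region(top, bottom):
    """DECSTBM: confine scrolling to rows top..bottom (1-based).
    Text outside this region (e.g. a status bar) is never redrawn."""
    return f"{CSI}{top};{bottom}r"

def reset_scroll_region():
    """DECSTBM with no parameters restores full-screen scrolling."""
    return f"{CSI}r"

def move_cursor(row, col):
    """CUP: absolute cursor positioning, 1-based coordinates."""
    return f"{CSI}{row};{col}H"

if __name__ == "__main__":
    # Reserve row 24 for a status bar; rows 1-23 scroll as a log.
    sys.stdout.write(set_scroll_region(1, 23))
    sys.stdout.write(move_cursor(24, 1) + "-- status bar --")
    sys.stdout.write(move_cursor(23, 1))
    for i in range(5):
        sys.stdout.write(f"log line {i}\n")  # scrolls only rows 1-23
    sys.stdout.write(reset_scroll_region())
```

The design point is that the application only ever appends at one position, instead of repainting the whole grid on every update, which is exactly what keeps the cursor from jumping around.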
- Using duplicity to back up your FreeBSD desktop
Backing up in modern times, we’ve had ZFS snapshots and replication to make this task extremely easy. However, you may not have access to another ZFS endpoint for replication, need to diversify risk by using a non-ZFS tool for backup, or are simply using UFS2, living the old skool life. For these situations, my first recommendation is to lean on Tarsnap for its ease of use and simplicity, making restoration just as easy as backing up. But some situations call for a different approach. Maybe you have a strict firewall at your company that doesn’t allow Tarsnap data streams to egress from your corporate network, or you have internal/easy access to storage endpoints, such as S3-compatible object storage or a large-file storage location with SFTP access. When you are faced with the latter, the duplicity (sysutils/duplicity in ports) utility is available as an easily installable package onto your FreeBSD system. ↫ Jason Tubnor at the FreeBSD Foundation The rest of the article explains how to use duplicity on FreeBSD for the purpose described above.
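For a sense of what driving duplicity looks like in practice, here is a minimal sketch of assembling its command line from a script; the host, paths, and GPG key ID are placeholders, not values from the article.

```python
# A minimal sketch of building a duplicity invocation; nas.example,
# the paths, and the key ID 0xDEADBEEF are hypothetical placeholders.
import subprocess

def build_backup_cmd(source, target_url, gpg_key=None, full=False):
    """Assemble a duplicity command line for an incremental (default)
    or full backup to an SFTP/S3-style target URL."""
    cmd = ["duplicity"]
    if full:
        cmd.append("full")
    if gpg_key:
        cmd += ["--encrypt-key", gpg_key]
    cmd += [source, target_url]
    return cmd

cmd = build_backup_cmd("/home/user",
                       "sftp://backup@nas.example//dumps/desktop",
                       gpg_key="0xDEADBEEF", full=True)
print(" ".join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```

Subsequent runs without `full` produce incremental backups against the last chain, which is the usual rotation duplicity is built around.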
- Testing MacOS on the Apple Network Server 2.0 ROMs
Earlier this year, Mac OS and Windows NT-capable ROMs were discovered for Apple’s unique AIX Network Server. Cameron Kaiser has since spent more time digging into just how capable these ROMs are, and has published another one of his detailed stories about his efforts. Well, thanks to Jeff Walther who generously built a few replica ROM SIMMs for me to test, we can now try the 2.0 MacOS ROMs on holmstock, our hard-working Apple Network Server 700 test rig (stockholm, my original ANS 500, is still officially a production unit). And there are some interesting things to report, especially when we pit the preproduction ROMs and this set head-to-head in MacBench, and even try booting Rhapsody on it. ↫ Cameron Kaiser A great read, as always.
- Windows gets a new Run dialog
With Windows being as old and long-running as it is, there's a ton of old and outdated bits and pieces lurking in every nook and cranny. I have always found these old relics fascinating, especially now that over the past few years, Microsoft has attempted to replace some of those bits and pieces with modern replacements (not always to great success, but that's another story). One of those parts of the UI that's been virtually unchanged since the release of Windows 95 is the Run dialog, but that's about to change: Microsoft has released a completely new Run dialog to early testers. Windows Run, also known as the Run dialog, is a surface that has been around for over 30 years. It has become a heavily relied upon tool for developers and advanced users alike. Users have decades of muscle memory where they hit Win+R, navigate through their Run history, and hit Enter to quickly access various paths and tools. We all have our favorite tool we launch there as well. For us, some of our favorites are wt (Windows Terminal), mstsc (Remote Desktop) and winword (Microsoft Word). But it’s more than jUsT a TeXt BoX tHaT rUnS tHiNgS. The Run dialog can handle navigating both local and network file paths as well. And everything it does, it does fast. Win+R opens the run dialog seemingly instantly. If we wanted to modernize the Run Dialog to fit the modern Windows 11 design style, we had to make sure it did everything just as well as before. We needed to maintain the same performance while also keeping the user interface minimal, just as Windows 95 intended. ↫ Clint Rutkas at the Microsoft Dev Blogs The new Run dialog looks like it belongs in Windows 11, which is a nice improvement, but the most important part is that they actually seem to have made it a little faster. Sure, they may have only shaved off a few milliseconds from its opening time, but considering virtually everything else they've touched in Windows over the years got considerably slower, that's a good showing for Microsoft. 
The new feature they've added is that by typing ~\, you can open your home directory. The one casualty is the browse button, which, according to Microsoft's data, literally nobody ever used. I know it's just a small thing and in the end not even a remotely consequential one, but with an operating system as old and storied as Windows, replacing these ancient parts that millions of people rely on every day absolutely fascinates me. There must be a considerable amount of pressure on the people developing something like this new Run dialog, especially with Windows' reputation being at one of its lowest points, so it's good to see them being able to deliver. The new Run dialog is available today for testers, and if you're on the Windows Insider Experimental Channel, you can enable it in Settings > System > Advanced. Coincidentally, on my Windows 11 machine that I use for just one stupid video game, this Advanced page displays a loading spinner for five minutes and then just dies. Also, Notepad won't start (one time it showed this dialog), and using the terminal to load it causes the old Win32 version of Notepad to open after 5 minutes of waiting, which then hangs and crashes. People pay money for this.
- GNOME is good, actually
While I'm normally a KDE user, I do keep close tabs on various other desktop environments, and install and set them up every now and then to see how they're faring, what improvements they've made, and ultimately, if my preference for KDE is still warranted. This usually means setting up a nice OpenBSD installation for Xfce, Fedora for GNOME, and less often others for some of the more niche desktop environments. Since GNOME 50 was just released, guess whose turn in the rotation is up? Since everybody's already made up their mind about their preferred desktop eons ago, with upsides and downsides debated far past their expiration date, I'm not particularly interested in reviewing desktop environments or Linux distributions. However, after asking around on Fedi, it seemed there was quite a bit of interest in an article detailing how I set up GNOME, what changes I make to the defaults, which extensions I use, what tweaks I apply, and so on. Of course, everything described in this article is highly personal, and I'm not arguing that this is the optimal way to tweak GNOME, that the extensions I use are the best ones, or that any visual modifications I make are better than whatever defaults GNOME uses. No, my goal with this article is twofold: one, to highlight that GNOME is a lot more configurable, extensible, and malleable than common wisdom on the internet would have you believe. It's not KDE or one of those cobbled-together tiling Wayland desktops, but it's definitely not as rigid as you might think. And two, that GNOME is good, actually. Tools of the trade The first thing I do is install a few crucial tools that make it easier to modify and tweak GNOME. I really dislike lists in articles, but I will begrudgingly use one here: After installing all of these tools, the actual tweaking can commence. Visual tweaks I didn't use to like GNOME's Adwaita visual style, but over the years, it started growing on me to the point where I don't actively dislike it anymore. 
With the arrival of libadwaita, it has also become effectively impossible to theme modern GNOME applications, so even if you do change to something else, many of your applications won't follow along. If consistency is something you care about, you'll stick to Adwaita, but that leaves one problem unresolved: applications that still use GTK3. These applications will follow a much older version of Adwaita, making them stand out like eyesores among all the modern GTK4 stuff. Luckily, since GTK3 applications are still properly themable, this is easily fixed: just install the adw-gtk3 theme, either by hand, or through your distribution's repositories. To enable it, first install the user themes extension through Extension Manager, and then enable the theme in GNOME Tweaks for Legacy Applications. Any potential GTK3 applications you still use will now integrate nicely with modern libadwaita applications. The one part of GNOME I really do deeply dislike is its icon theme. I can't quite explain why I dislike this icon set so much, but it runs deep, so one of the very first things I do is replace the default GNOME icon set with my personal favourite, Qogir. This is a popular icon set, so it's usually available in your distribution's repositories, but I always install it from its GitHub page. Changing GNOME's icon set is as simple as selecting it in GNOME Tweaks. You can't get much more personal taste than an icon set, and there are dozens of amazing sets to choose from in the Linux world. Changing them out and trying out new ones is stupidly easy, and it's definitely worth looking at a few that might be more pleasing to you than GNOME's (or KDE's) default. Lastly, I open Add Water and enable the amazing GNOME theme for LibreWolf. Add Water basically makes this as easy as flipping a switch, so there's no need to copy any files into your LibreWolf profile or whatever. 
The application also provides a few more small tweaks to fiddle with, like enabling standard tab widths so tabs don't grow and shrink as you close and open tabs, moving the bookmarks bar below the tab bar, and many more. Extensions Since the release of GNOME 3 in 2011, extensions have been the most capable way to modify GNOME's look, behaviour, and feature set. As far as I can tell, while the extension framework is an official part of the GNOME Shell, the extensions themselves are all third-party and not part of a vanilla GNOME installation. By now, there are over 2800 listed extensions, but that number includes abandoned extensions, so it's hard to determine the actual number of currently-maintained ones. Whatever the actual number is, there's bound to be things in there you're going to want to use. Here are the extensions I have installed. Let's just start at the top and work our way down. I guess I'm forced to do another list. There are countless more extensions to choose from, and you're definitely going to find things you never even thought could be useful. Miscellaneous tweaks There are a few other things I modify. In GNOME Tweaks, I make it so that double-clicking a window's titlebar minimises it while right-clicking it lowers it; two features I picked up during my years as a BeOS user that I absolutely refuse to give up. I configure the dock from Dash to Dock so that it always remains on top and never hides itself, no matter the circumstances. In Settings, I disable virtual desktops entirely (I don't like virtual desktops), and I make sure tap-to-click is disabled (if I'm on a laptop). GNOME is good, actually After making all of these changes, I feel quite comfortable using GNOME, at least on my laptop. It's a nice, coherent experience, and offers what is probably the most polished graphical user interface you can find on Linux, even if it isn't the most full-featured. The third-party application ecosystem, through modern
- How fast is a macOS VM, and how small could it be?
To assess how small a macOS VM could be, I ran the same VM of macOS 26.4.1 on progressively smaller CPU core and memory allocations, using my virtualiser Viable. The VM’s display window was set to a standard 1600 x 1000, and I ran Safari through its paces and performed some lightweight everyday tasks, including Storage analysis in Settings. Starting with 4 virtual cores and 8 GB vRAM, where the VM ran perfectly briskly with around 5 GB of memory used, I stepped down to 3 cores and 6 GB, to discover that memory usage fell to 3.9 GB and everything worked well. With just 2 cores and 4 GB of memory only 3.1 GB of that was used, and the VM continued to handle those lightweight tasks normally. ↫ Howard Oakley This is good news for people interested in the MacBook Neo who may also want to run a macOS virtual machine on it.
- Email is crazy
Email is like those creaking old Terminators from the ’70s which continue to function without complaining. Designed for a world that doesn’t exist anymore, it has optional encryption, no built-in auth, three⁺ retrofitted security layers bolted on top, an unstandardized filtering layer and many more quirks. Yet billions of emails arrive correctly every single day. Email is not elegant but nonetheless it is Lindy. In the new age of agentic AI, we can only expect it to metamorphose into another dimension. ↫ Saurabh "Sam" Khawase The fact that email is this complicated is bad enough, but having it be so dominantly controlled by only a few large gatekeepers like Google and Microsoft surely isn't helping either. I feel like email is no longer really a technology individuals can actively partake in at every level; it feels much more like WhatsApp or iMessage or whatever, in that we just get to send messages, and that's it. Running your own mail server isn't only a complex endeavour, it's also a continuous cat-and-mouse game with companies like Google and Microsoft to ensure you don't end up on some shitlist and your emails stop arriving. I settled on Fastmail as my email service, and it works quite well. Still, I would love to be able to just run my own email server, or have some of my far more capable friends run one for a small group of us, but it's such a daunting and unpleasant effort few people seem to have the stomach and perseverance for it.
- The day I logged 1 in every 2000 public IPv4: visualizing the AI scraper DDoS
What if you run a few online services for you and your friends, like a small git instance and a grocery list service, but you get absolutely hammered by AI scrapers? I cannot impress upon you, reader, that this is not only an attack that is coordinated, it is an attack that is distributed. I run a small set of services, basically only for me and my friends. I am not a hyperscaler, I am not a tech company, I am not even a small platform. I have a git forge where I put the shit I make, and a couple other services where me and my friends backup our files or write our grocery lists. I am not fucking Meta and I cannot scale the fuck up just because OpenAI or Anthropic or Meta or whoever is training a model that week wants to suck all the content out of my VPS ONCE MORE until it’s dry. ↫ lux at VulpineCitrus So how much traffic did the author of this piece, lux, get from AI scraping bots? Within a time period of 24 hours, they were hammered by 2,040,670 unique IP addresses, 98% of which were IPv4 addresses, which means that 1 out of every 2000 publicly available IPv4 addresses was involved in the scraping. Together, they performed over 5 million requests. And just to reiterate: they were scraping a few very small, friends-only services run by some random person. This is absolutely insane. If, at this point in time, with everything that we know about just how deeply unethical every single aspect of AI is, you're still using and promoting it, what is wrong with you? If you're so addicted to your AI girlfriend's unending stream of useless, forgettable, sycophantic slop, despite being aware of the damage you're doing to those around you, there's something seriously wrong with you, and you desperately need professional help. You don't need any of this. The world doesn't need any of this. Nobody likes the slop AI regurgitates, and nobody likes you for enabling it. Get help.
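The "1 in every 2000" ratio survives a quick back-of-the-envelope check. This sketch uses the full 2^32 IPv4 address space as a rough upper bound; the actually routable public space is somewhat smaller, which is why the headline figure comes out a bit tighter than this estimate.

```python
# Sanity check of the "1 in every 2000 public IPv4" claim, using
# the full 2^32 IPv4 space as a rough upper bound on public space.
TOTAL_IPV4 = 2 ** 32
unique_ips = 2_040_670   # unique addresses logged in 24 hours
ipv4_share = 0.98        # 98% of them were IPv4

ipv4_seen = int(unique_ips * ipv4_share)
one_in_n = TOTAL_IPV4 / ipv4_seen
print(f"roughly 1 in every {one_in_n:.0f} IPv4 addresses")
```

With the smaller routable public space in the denominator instead of 2^32, the ratio lands right around the article's 1-in-2000 figure.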
- Earliest 86-DOS and PC-DOS code released as open source
Microsoft is continuing its efforts to release early versions of DOS as open source, and today we've got a special one. We’re stoked today to showcase some newly available source code materials that provide an even earlier look into the development of PC-DOS 1.00, the first release of DOS for the IBM PC. A dedicated team of historians and preservationists led by Yufeng Gao and Rich Cini has worked to locate, scan, and transcribe the stack of DOS-era source listings from Tim Paterson, the author of DOS. The listings include sources to the 86-DOS 1.00 kernel, several development snapshots of the PC-DOS 1.00 kernel, and some well-known utilities such as CHKDSK. Not only were these assembler listings, but there were also listings of the assembler itself! This work offers rare insight into how MS-DOS/PC-DOS came to be, and how operating system development was done at the time, not as it was later reconstructed. ↫ Stacey Haffner and Scott Hanselman It's wild that the source code had to be transcribed from paper, including notes and changes. You can find more information about the process on Gao’s website and Cini’s website.
- Apple gives up on Vision Pro, disbands Vision Pro team
When Apple unveiled the Vision Pro, almost three (!) years ago, I concluded: If there’s one company that can convince people to spend $3500 to strap an isolating dystopian glowing robot mask onto their faces it’s Apple, but I still have a hard time believing this is what people want. ↫ Thom Holwerda at OSNews (quoting myself is weird) MacRumors' Juli Clover, today: Apple has all but given up on the Vision Pro after the M5 model failed to revitalize interest in the device, MacRumors has learned. Apple updated the Vision Pro with a faster M5 chip and a more comfortable band in October 2025, but there were no other hardware changes, and consumers still weren't interested. Apple has apparently stopped work on the Vision Pro and the Vision Pro team has been redistributed to other teams within Apple. Some former Vision Pro team members are working on Siri, which is not a surprise as Vision Pro chief Mike Rockwell has been leading the Siri team since March 2025. ↫ Juli Clover at MacRumors VR (which is what the Vision Pro is, whether Apple's marketing likes to say it or not) has proven to be good for exactly two things: games and porn. The Vision Pro has neither. It was destined to be a flop from the start, as nobody wants to strap an uncomfortable computer to their face that does less than all of the other computers they already have, and what it does do, it does worse. I do wonder if this makes the Vision Pro the most expensive flop in human history. Has any company ever spent more on a product that failed this spectacularly?
- Apple wants to kill your Time Capsule, but it runs NetBSD so they can't
It seems like Apple is finally going to remove support for AFP from macOS, twelve years after first moving from AFP to SMB as its default network file-sharing technology. This change shouldn't impact most people, as it's highly unlikely you're using AFP for anything in 2026. Still, there is one small group of people to whom this change has an actual impact: owners of Apple's Time Capsule devices. Time Capsules only support AFP and SMB1, and with SMB1 having been removed from macOS ages ago, and now AFP being on the chopping block as well, macOS 27 would render your Time Capsule more or less unusable. It's important to note that the last Time Capsule sold by Apple, the fifth generation, was released in 2013, and the product line as a whole was discontinued in 2018. If you bought a Time Capsule in the twilight years of the line's availability, I think you have a genuine reason to be perturbed by Apple cutting you off from your product if you upgrade to macOS 27, but at least you have the option of keeping an older version of macOS around so you can keep interacting with your Time Capsule. It still feels like a bit of a shitty move though, as those fifth-generation models came with up to 3TB of storage, which can still serve as a solid NAS solution. Thank your lucky stars, then, that open source can, as usual, come to the rescue when proprietary software vendors do what they always do and screw over their customers. Did you know every generation of Time Capsule actually runs NetBSD, and that it's trivially easy to add support for Samba 4 and SMB3 authentication to your Time Capsule, thereby extending its life expectancy considerably? TimeCapsuleSMB does exactly that. If the setup completes successfully, your Time Capsule will run its own Samba 4 server, advertise itself over Bonjour (showing up automatically in the "Network" folder on macOS), and accept authenticated SMB3 connections from macOS.
You should then be able to open Finder, choose Connect to Server, and use a normal SMB URL instead of relying on Apple’s legacy stack. You should also be able to use the disk for Time Machine backups. ↫ TimeCapsuleSMB It's compatible with both NetBSD 4 and NetBSD 6-based Time Capsules, although you'll need to run a single SMB activation command every time a NetBSD 4-based Time Capsule reboots. This will also disable any AFP and SMB1 support, but that is kind of moot since those are exactly the technologies that don't and won't work anymore once macOS 27 is released. The installation is also entirely reversible if, for whatever reason, you want to undo the addition of Samba 4. This whole saga is such an excellent example of why open source software protects users' rights, by design.
- Dillo 3.3.0 released
Dillo is an amazing web browser for those of us who want their web browsing experience to be calmer and less flashy. Dillo also happens to be a very UNIX-y browser, and its latest release, 3.3.0, underlines that. A new dilloc program is now available to control Dillo from the command line or from a script. It searches for Dillo by the PID in the DILLO_PID environment variable or for a unique Dillo process if not set. ↫ Dillo 3.3.0 release notes You can use this program to control your Dillo instance, with basic commands like reloading the current URL, opening a new URL, and so on, but also things like dumping the current page's contents. I have a feeling more commands and features will be added in future releases, but for now, even the current set of commands can be helpful for scripting purposes. I'm sure some of you who live and die in the terminal are already thinking of all the possibilities here. You can now also add page actions to the right-click context menu, so you can do things like reload a page with a Chrome curl impersonator to avoid certain JavaScript walls. This, too, is of course extensible. Dillo 3.3.0 also brings experimental support for building the browser with FLTK 1.4, and implements a fix specifically to make OAuth work properly.
- Ubuntu is going to integrate "AI", but Canonical remains vague about the how and why
Ubuntu, being one of the more commercial Linux distributions, was always going to jump on the "AI" bandwagon, and Jon Seager, Canonical's VP of Engineering, published a blog post with more details. Throughout 2026 we’ll be working on enabling access to frontier AI for Ubuntu users in a way that is deliberate, secure, and aligned with our open source values. By focusing on the combination of education for our engineers, our existing knowledge of building resilient systems and our strengthening silicon partnerships, we will deliver efficient local inference, powerful accessibility features, and a context-aware OS that makes Ubuntu meaningfully more capable for the people who rely on it. Ubuntu is not becoming an AI product, but it can become stronger with thoughtful AI integration. ↫ Jon Seager at Ubuntu Discourse The problem with this entire post is that, much like all other corporate communications about "AI", it's all deceptively vague, open-ended, and weaselly. Adjectives like "focused", "principled", "thoughtful", and "tasteful" don't really mean anything, and leave everything open for basically every type of slop "AI" feature under the sun. Their claims about open weights and open source models are also weakened by words like "favour" and "where possible", again leaving the door wide open for basically any shady "AI" company's models and features to find their way into your default Ubuntu installation. There's also very little in terms of concrete plans and proposed features, leaving Ubuntu users in the dark about what, exactly, is going to be added to their operating system of choice during the remainder of the year. There are mentions of improved text-to-speech/speech-to-text and text regurgitators, but that's about it. None of it feels particularly inspired or ground-breaking, and the veneer of open source, ethical model creation, and so on, is particularly thin this time around, even for Canonical. I don't really feel like I know a lot more about Canonical's "AI" intentions for Ubuntu after reading this post than I did before, other than that Ubuntu users might be able to generate text in their email client or whatever later this year. Is that really something anybody wants?
- If 64-bit Windows 11 contains a copy of 32-bit explorer.exe, could you run it as its shell?
Raymond Chen published a blog post about how a crappy uninstaller on Windows caused a mysterious spike in the number of Explorer (the Windows graphical shell) crashes. It turns out the buggy uninstaller caused repeated crashes in the 32-bit version of Explorer on 64-bit systems, and... hold on a minute. The how many bits on the what now? The 32-bit version of Explorer exists for backward compatibility with 32-bit programs. This is not the copy of Explorer that is handling your taskbar or desktop or File Explorer windows. So if the 32-bit Explorer is running on a 64-bit system, it’s because some other program is using it to do some dirty work. ↫ Raymond Chen at The Old New Thing I had no idea that 64-bit Windows included a copy of the 32-bit Explorer for backwards compatibility. It obviously makes sense, but I just never stopped to think about it. This made me wonder, though: could you go nuts and do something really dumb, and somehow trick 64-bit Windows into running this 32-bit copy of Explorer as its shell? You'd be running 32-bit Explorer on 64-bit Windows using the 32-bit WoW64 binaries where you just pulled the 32-bit Explorer binary from, which seems like a really nonsensical thing to do. Since there are no longer any 32-bit builds of Windows 11, you also can't just copy over the 32-bit Explorer from a 32-bit Windows 11 build and achieve the same goal that way, so you'd really have to go digging around in WoW64 to get 32-bit versions. I guess the answer to this question depends on just how complete this copy of 32-bit Explorer really is, and whether Windows has any defenses or triggers in place to prevent someone from doing something this uselessly stupid. Of course, there's no practical reason to do any of this and it makes very little sense, but it might be a fun hacking project. Most likely the Windows experts among you are wondering what kind of utterly deranged new designer drug I'm on, but I was always told that sometimes, the dumbest questions can lead to the most interesting answers, so here we are.
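For anyone who wants to try the experiment, the standard shell-replacement mechanism on Windows is the Shell value under the Winlogon registry key; a hypothetical test could point the per-user value at the SysWOW64 copy of Explorer. Whether that 32-bit copy actually comes up as a functioning shell is exactly the open question here, so treat this as an untested sketch, at your own risk:

```reg
Windows Registry Editor Version 5.00

; Hypothetical experiment: set a per-user shell pointing at the
; 32-bit Explorer that ships under SysWOW64 on 64-bit installs.
; Whether this copy will actually function as a shell is precisely
; the open question -- revert by deleting the "Shell" value.
[HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
"Shell"="C:\\Windows\\SysWOW64\\explorer.exe"
```

The per-user value, when present, overrides the machine-wide default of explorer.exe at the next logon, which keeps the experiment reversible from another account.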
- 8087 emulation on 8086 systems
Not too long ago I had a need and an opportunity to re-acquaint myself with the mechanism used for software emulation of the 8087 FPU on 8086/8088 machines. ↫ Michal Necasek Look, when a Michal Necasek article starts out like this, you know you're in for a learnin' ol' time. The 8087 was a floating-point coprocessor for the 8086 and 8088 processors, since back in those early days, processors did not include an integrated floating-point unit. It wouldn't be until the release of the 486DX, in 1989, that Intel would integrate an FPU inside the processor itself, negating the need for a separate chip and socket. Interestingly enough, Intel also released a cut-down version of the 486 with the FPU removed, the 486SX, for which an optional external FPU did exist.
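To make the emulation idea concrete: in the DOS-era scheme, toolchains emitted interrupt instructions in place of FPU opcodes, and the interrupt handlers either emulated each operation in software or patched in real 8087 instructions when a coprocessor was detected. The toy Python sketch below illustrates only the software-dispatch side of that idea; the opcode names and stack model are simplified stand-ins, not anything from Necasek's article.

```python
# Toy sketch of the dispatch side of 8087 software emulation:
# each "trapped" FPU operation is carried out by a software handler
# against an emulated register stack (ST(0)..ST(7) in the real chip).

class SoftFPU:
    def __init__(self):
        self.stack = []            # emulated FPU register stack, top first

    def fld(self, value):          # FLD: push a value onto the stack
        self.stack.insert(0, value)

    def fadd(self):                # FADDP-style: pop two, push the sum
        a = self.stack.pop(0)
        b = self.stack.pop(0)
        self.stack.insert(0, a + b)

    def fstp(self):                # FSTP: pop and return the top of stack
        return self.stack.pop(0)

def run(program):
    """Stand-in for the interrupt handlers a real emulator installs:
    each would-be FPU instruction is emulated in software."""
    fpu = SoftFPU()
    result = None
    for op, *args in program:
        if op == "FLD":
            fpu.fld(args[0])
        elif op == "FADD":
            fpu.fadd()
        elif op == "FSTP":
            result = fpu.fstp()
        else:
            raise ValueError(f"unhandled opcode {op}")
    return result

print(run([("FLD", 1.5), ("FLD", 2.25), ("FADD",), ("FSTP",)]))  # 3.75
```

On real hardware the interesting part is the decode step, reconstructing which FPU instruction the interrupted code intended, which is what the linked article digs into.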
- How hard is it to open a file?
Sebastian Wick has a great explanation of why opening files programmatically is a lot more complex and fraught with dangers than you might think it is. This issue was relevant for Wick as he is one of the lead developers of Flatpak, for which a number of security issues have recently been discovered, and it just so happens that many of these issues dealt with this very topic. The biggest security issue found was a complete sandbox escape, originating from the fact that flatpak run, the command-line tool to start a Flatpak application, accepted path strings, since flatpak run is assumed to be run by a trusted user. The problem lay in a D-Bus service sandboxed applications could use to create subsandboxes, and this service was built around, you guessed it, flatpak run. The issues in question, including this complete sandbox escape, have been addressed and fixed, but they highlight exactly the dangers that can come from opening files. This subsandboxing approach in Flatpak is built on assumptions from fifteen years ago, and times have changed since then. If you're a programmer who deals with opening files, you might want to take a look at your own code to see if similar issues exist.
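One concrete instance of "opening a file is harder than it looks" is symlink trickery: a naive open happily follows a planted symlink to a file the caller never intended to expose, while opening with O_NOFOLLOW refuses to traverse a symlink at the final path component. This is a minimal generic sketch of that one hazard, not Flatpak's actual fix (which involves its portal and D-Bus layer):

```python
# Demonstrates the symlink hazard: a naive open() follows a planted
# symlink, while os.open() with O_NOFOLLOW refuses (raises OSError).

import os
import tempfile

d = tempfile.mkdtemp()
secret = os.path.join(d, "secret")
with open(secret, "w") as f:
    f.write("top secret")

link = os.path.join(d, "innocent.txt")
os.symlink(secret, link)                 # attacker plants a symlink

naive = open(link).read()                # follows the link: leaks the target

try:
    os.open(link, os.O_RDONLY | os.O_NOFOLLOW)
    hardened = "opened"
except OSError:                          # ELOOP on Linux
    hardened = "refused"

print(naive, hardened)  # top secret refused
```

Real-world hardening goes further (openat-style directory-relative opens, validating the file type after opening), but even this one flag closes a whole class of path-string attacks.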

- Canonical Unveils Ubuntu AI Strategy: Local Models, User Control, and Smarter Workflows
by George Whittaker Canonical has officially revealed its long-anticipated plans to bring artificial intelligence features into Ubuntu, marking a significant shift for one of the world’s most widely used Linux distributions. Rather than rushing into the AI wave, Canonical is taking a measured, privacy-focused approach, one that aims to enhance the operating system without compromising its open-source values.
The rollout is expected to take place gradually throughout 2026, with early features likely appearing in upcoming Ubuntu releases.

A Gradual, Thoughtful AI Rollout

Canonical isn’t positioning Ubuntu as an “AI-first” operating system. Instead, the company is introducing AI in stages, focusing on practical improvements rather than hype-driven features.
The plan follows a two-phase model:

- Implicit AI features: enhancements running quietly in the background
- Explicit AI features: user-facing tools and workflows powered by AI

This approach allows Ubuntu to evolve naturally, improving existing functionality before introducing more advanced capabilities.

Local AI First, Not the Cloud

One of the most important aspects of Canonical’s strategy is its emphasis on local AI processing, also known as on-device inference.
Instead of sending data to remote servers, Ubuntu will aim to:

- Run AI models directly on the user’s hardware
- Reduce reliance on cloud services
- Improve privacy and performance

Canonical has made it clear that local inference will be the default, with cloud-based options available only when explicitly chosen by the user.
This aligns closely with the privacy expectations of Linux users, who often prefer greater control over their data.

What AI Features Could Look Like

Canonical has outlined several potential use cases for AI inside Ubuntu. These include:

Accessibility Improvements

AI will enhance tools like:

- Speech-to-text
- Text-to-speech
- Assistive technologies

These features aim to make Ubuntu more inclusive and easier to use for a wider range of users.

Smarter System Assistance

Future AI features may help users:

- Troubleshoot system issues
- Interpret logs and error messages
- Automate repetitive tasks

This could significantly lower the learning curve for new Linux users.

Agent-Based Automation

Canonical is also exploring “agentic” AI workflows, where AI can take actions on behalf of the user.
Examples include: Go to Full Article
- Thunderbird 150 Lands on Linux: Smarter Encryption, Better Tools, and a Polished Experience
by George Whittaker Mozilla has officially rolled out Thunderbird 150.0, the latest version of its open-source email client, bringing a mix of security-focused enhancements, usability upgrades, and workflow improvements for Linux and other platforms. Released in April 2026, this update continues Thunderbird’s steady evolution as a powerful desktop email solution.
For Linux users, Thunderbird 150 delivers meaningful updates that improve both everyday usability and advanced email handling, especially for encrypted communication.

Stronger Support for Encrypted Email

One of the standout improvements in Thunderbird 150 is how it handles encrypted messages.
Users can now:

- Search inside encrypted emails (OpenPGP and S/MIME)
- Generate “unobtrusive” OpenPGP signatures that appear cleaner to recipients

These changes make encrypted communication far more practical, especially for users who rely on secure email for work or privacy-sensitive tasks.

New Productivity and Workflow Features

Thunderbird 150 introduces several small but impactful workflow improvements:

- A new Account Hub opens automatically on first launch, simplifying setup
- Recent Destinations in settings can now be sorted alphabetically
- Address book entries can be copied as vCard files
- A new custom accent color option allows interface personalization

These updates make Thunderbird easier to configure and more flexible to use daily.

Improved Built-In PDF Viewer

Thunderbird’s integrated PDF viewer gets a useful upgrade: users can now reorder pages directly within the viewer.
This is particularly helpful for:

- Managing attachments without external tools
- Editing documents quickly before sending
- Streamlining email-based workflows

Combined with ongoing security fixes, the PDF viewer becomes both more capable and safer.

Calendar and Interface Enhancements

Several improvements focus on usability and accessibility:

- Calendar views now support touchscreen scrolling
- Fixed issues with calendar layouts and navigation
- Better screen reader support and accessibility fixes
- General UI refinements across the application

These changes contribute to a smoother, more consistent user experience across devices.

Bug Fixes and Stability Improvements

Thunderbird 150 also resolves a wide range of issues, including: Go to Full Article
- Linux Kernel 6.19 Reaches End of Life: Time to Move Forward
by George Whittaker The Linux kernel continues its fast-paced release cycle, and with that comes an important milestone: Linux kernel 6.19 has officially reached end of life (EOL). For users and distributions still running this branch, it’s now time to upgrade to a newer kernel version.
This isn’t unexpected: Linux 6.19 was never intended to be a long-term release, but it does serve as a reminder of how quickly non-LTS kernel branches move through their lifecycle.

Official End of Support

The final update in the 6.19 series, Linux 6.19.14, has been released and marked as the last maintenance version. Kernel maintainer Greg Kroah-Hartman confirmed that no further updates will follow, stating that the branch is now officially end-of-life.
On kernel.org, the 6.19 series is now listed as EOL, meaning it will no longer receive bug fixes or security patches.

Why 6.19 Had a Short Lifespan

Unlike some kernel releases, Linux 6.19 was not a long-term support (LTS) version. Short-lived kernel branches are typically supported for only a few months before being replaced by newer releases.
Linux follows a rapid development model:

- New major versions are released frequently
- Short-term branches receive limited updates
- Only selected kernels are designated as LTS for extended support

Because of this, 6.19 was always meant to be a stepping stone rather than a long-term foundation.

What Users Should Do Now

With 6.19 no longer maintained, continuing to use it poses risks, especially in environments where security and stability matter.
Recommended upgrade paths include:

Upgrade to Linux 7.0

The most direct path forward is the Linux 7.0 kernel series, which succeeds 6.19 and introduces new hardware support and ongoing fixes.
This is a good option for:

- Desktop users
- Rolling-release distributions
- Users who want the latest features

Switch to an LTS Kernel

For production systems, servers, or long-term stability, moving to an LTS kernel is often the better choice.
Current LTS options include:

- Linux 6.18 LTS (supported until 2028)
- Linux 6.12 LTS (supported until 2028)
- Linux 6.6 LTS (supported until 2027)

These versions receive ongoing security updates and are better suited for stable environments.

Why EOL Matters

When a kernel reaches end of life: Go to Full Article
- Archinstall 4.2 Shifts to Wayland-First Profiles, Leaving X.Org Behind
by George Whittaker The Arch Linux installer continues evolving alongside the broader Linux desktop ecosystem. With the release of Archinstall 4.2, a notable change has arrived: Wayland is now the default focus for graphical installation profiles, while traditional X.Org-based profiles have been removed or deprioritized.
This move reflects a wider transition happening across Linux, one that is gradually redefining how graphical environments are built and used.

A Turning Point for Archinstall

Archinstall, the official guided installer for Arch Linux, has steadily improved over time to make installation more accessible while still maintaining Arch’s minimalist philosophy.
With version 4.2, the installer now aligns more closely with modern desktop trends by emphasizing Wayland-based environments during setup, instead of offering traditional X.Org configurations as first-class options.
This doesn’t mean X.Org is completely gone from Arch Linux, but it does signal a clear shift in direction.

Why Wayland Is Taking Over

Wayland has been gaining traction for years as the successor to X.Org, offering a more streamlined and secure approach to rendering graphics on Linux.
Compared to X.Org, Wayland is designed to:

- Reduce complexity in the graphics stack
- Improve security by isolating applications
- Deliver smoother rendering and better performance
- Support modern display technologies like high-DPI and variable refresh rates

As the Linux ecosystem evolves, many distributions and desktop environments are prioritizing Wayland as the default display protocol.

What Changed in Archinstall 4.2

With this release, users installing Arch through Archinstall will notice:

- Wayland-based desktop environments and compositors are now the primary options
- X.Org-centric setups are no longer emphasized in guided profiles
- Installation workflows better reflect modern Linux defaults

This simplifies the installation experience for new users, who no longer need to choose between legacy and modern display systems during setup.

What About X.Org?

While Archinstall is moving forward, X.Org itself is not disappearing overnight.
Many applications and workflows still rely on X11, and compatibility is maintained through XWayland, which allows X11 applications to run within Wayland sessions.
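If you are unsure which protocol your own session is using, the XDG_SESSION_TYPE environment variable (set by the session manager on most modern Linux desktops) is the usual quick check. A small illustrative sketch, not part of Archinstall:

```python
# Report whether the current desktop session is Wayland or X11,
# based on the XDG_SESSION_TYPE environment variable that session
# managers set (common values: "wayland", "x11", "tty").

import os

def describe_session(session_type: str) -> str:
    if session_type == "wayland":
        return "Wayland session (X11 apps go through XWayland)"
    if session_type == "x11":
        return "legacy X11 session"
    return f"unrecognized session type: {session_type}"

print(describe_session(os.environ.get("XDG_SESSION_TYPE", "unknown")))
```

Note that individual X11 applications running under XWayland will still see themselves as talking to an X server, which is the whole point of the compatibility layer.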
For advanced users, Arch still provides full flexibility: Go to Full Article
- OpenClaw in 2026: What It Is, Who’s Using It, and Whether Your Business Should Adopt It
by George Whittaker “probably the single most important release of software, probably ever.”
— Jensen Huang, CEO of NVIDIA
Wow! That’s a bold statement from one of the most influential figures in modern computing.
But is it true? Some people think so. Others think it’s hype. Most are somewhere in between, aware of OpenClaw, but not entirely sure what to make of it. Are people actually using it? Yes. Who’s using it? More than you might expect. Is it experimental, or is it already changing how work gets done? That depends on how it’s being applied. Is it more relevant for businesses or consumers right now? That’s one of the most important, and most misunderstood, questions.
This article breaks that down clearly: what OpenClaw is, how it works, who is using it today, and where it actually creates value.
What makes OpenClaw different isn’t just the technology, it’s where it fits. Most of the AI tools people are familiar with still require a human to take the next step. They assist, but they don’t execute. OpenClaw changes that dynamic by connecting decision-making directly to action. Once you understand that shift, the rest of the discussion, who’s using it, how it’s being deployed, and where it creates value, starts to make a lot more sense.
Top 10 Questions About OpenClaw

What is OpenClaw?
OpenClaw is an open-source AI agent framework that enables large language models like Claude, GPT, and Gemini to execute real-world tasks across software systems, including APIs, files, and workflows.
What does OpenClaw actually do?
OpenClaw functions as an execution layer that allows AI systems to take actions, such as sending emails, updating CRM records, or running scripts, instead of only generating responses.
Do you need to be a developer to use OpenClaw?
No, but technical familiarity helps. Non-developers can use prebuilt workflows, while developers can customize and scale implementations more effectively.
Is OpenClaw more suited for business or consumer use?
OpenClaw is currently more suited for business and technical use cases where structured workflows exist. Consumer use is emerging but remains secondary.
How is OpenClaw different from ChatGPT or Claude?
ChatGPT and Claude generate outputs, while OpenClaw enables those outputs to trigger actions across connected systems.
Who created OpenClaw? Go to Full Article
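The "execution layer" idea described above can be sketched in a few lines: a model's output is parsed into a structured tool call, which is then dispatched to a real action instead of just being displayed. None of this is OpenClaw's actual API; the tool names and the JSON shape below are invented purely for illustration of the pattern.

```python
# Illustrative sketch of an AI "execution layer": model output is
# parsed as a structured tool call and dispatched to a real action.
# Hypothetical tool names and JSON shape; not OpenClaw's actual API.

import json

def send_email(to, subject):                 # hypothetical action
    return f"email to {to}: {subject}"

def update_crm(record_id, status):           # hypothetical action
    return f"record {record_id} -> {status}"

TOOLS = {"send_email": send_email, "update_crm": update_crm}

def dispatch(model_output: str) -> str:
    """Turn a model's JSON 'tool call' into an executed action."""
    call = json.loads(model_output)
    tool = TOOLS.get(call["tool"])
    if tool is None:
        raise ValueError(f"unknown tool: {call['tool']}")
    return tool(**call["args"])

# A chat model would normally produce this string:
print(dispatch('{"tool": "update_crm", "args": {"record_id": 7, "status": "closed"}}'))
# record 7 -> closed
```

The security-relevant design choice is the allowlist: the model can only invoke tools the integrator registered, which is what separates "assist" from "execute" in frameworks of this kind.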
- Linux Kernel Developers Adopt New Fuzzing Tools
by George Whittaker The Linux kernel development community is stepping up its security game once again. Developers, led by key maintainers like Greg Kroah-Hartman, are actively adopting new fuzzing tools to uncover bugs earlier and improve overall kernel reliability.
This move reflects a broader shift toward automated testing and AI-assisted development, as the kernel continues to grow in complexity and scale.

What Is Fuzzing and Why It Matters

Fuzzing is a software testing technique that feeds random or unexpected inputs into a program to trigger crashes or uncover vulnerabilities.
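The core loop is simple enough to sketch in a few lines: generate random inputs, run the target, and record which inputs make it crash. Real kernel fuzzers like Syzkaller generate structured system-call sequences and use coverage feedback rather than raw bytes, but the idea is the same; the "target" below has a deliberately planted bug for illustration.

```python
# Toy fuzzer: feed random byte strings into a target and collect the
# inputs that crash it. The target's bug (an unchecked header byte)
# is planted for illustration.

import random

def fragile_parser(data: bytes) -> int:
    """A stand-in 'target' with a planted bug."""
    if len(data) > 2 and data[0] == 0xFF:
        raise RuntimeError("boom: unchecked header")  # the bug
    return len(data)

def fuzz(target, iterations=10_000, seed=0):
    rng = random.Random(seed)                 # fixed seed: reproducible run
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))       # record the crashing input
    return crashes

crashes = fuzz(fragile_parser)
print(f"found {len(crashes)} crashing inputs")
```

Every recorded crash here shares the same root cause, which is why real tooling invests so heavily in the deduplication and crash-analysis side mentioned below.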
In the Linux kernel, fuzzing has become one of the most effective ways to detect:

- Memory corruption bugs
- Race conditions
- Privilege escalation flaws
- Edge-case failures in subsystems

Modern fuzzers like Syzkaller have already discovered thousands of kernel bugs over the years, making them a cornerstone of Linux security testing.

New Tools Enter the Scene

Recently, kernel maintainers have begun experimenting with new fuzzing frameworks and tooling, including a project internally referred to as “clanker”, which has already been used to identify multiple issues across different kernel subsystems.
Early testing has uncovered bugs in areas such as:

- SMB/KSMBD networking code
- USB and HID subsystems
- Filesystems like F2FS
- Wireless and device drivers

The speed at which these issues were discovered suggests that these new tools are significantly improving bug detection efficiency.

AI and Smarter Fuzzing Techniques

One of the most interesting developments is the growing role of AI and machine learning in fuzzing.
New research projects like KernelGPT use large language models to:

- Automatically generate system call sequences
- Improve test coverage
- Discover previously hidden execution paths

These techniques can enhance traditional fuzzers by making them smarter about how they explore the kernel’s behavior.
Other advancements include:

- Better crash analysis and deduplication tools (like ECHO)
- Configuration-aware fuzzing to explore deeper kernel states
- Feedback-driven fuzzing loops for improved coverage

Together, these innovations help developers focus on the most meaningful bugs rather than sifting through duplicate reports.

Why This Shift Is Happening Now

The Linux kernel is one of the most complex software projects in existence. With millions of lines of code and contributions from thousands of developers, manually catching every bug is nearly impossible. Go to Full Article
- GNOME 50 Reaches Arch Linux: A Leaner, Wayland-Only Future Arrives
by George Whittaker Arch Linux users are among the first to experience the latest GNOME desktop, as GNOME 50 has begun rolling out through Arch’s repositories. Thanks to Arch’s rolling-release model, new upstream software like GNOME arrives quickly, giving users early access to the newest features and architectural changes.
With GNOME 50, that includes one of the most significant shifts in the desktop’s history.

A Major GNOME Milestone

GNOME 50, officially released in March 2026 under the codename “Tokyo,” represents six months of development and refinement from the GNOME community.
Unlike some previous versions, this release focuses less on dramatic redesigns and more on strengthening the foundation of the desktop, improving performance, modernizing graphics handling, and simplifying long-standing complexities.
For Arch Linux users, that translates into a more streamlined and future-ready desktop environment.

Goodbye X11, Hello Wayland-Only Desktop

The headline change in GNOME 50 is the complete removal of X11 support from GNOME Shell and its window manager, Mutter.
After years of gradual transition:

- X11 sessions were first deprecated
- Then disabled by default
- And now fully removed in GNOME 50

This means GNOME now runs exclusively on Wayland, with legacy X11 applications handled through the XWayland compatibility layer.
The result is a simpler, more modern graphics stack that reduces maintenance overhead and improves long-term performance and security.

Improved Graphics and Display Handling

GNOME 50 brings several key improvements to display and graphics performance:

- Variable Refresh Rate (VRR) enabled by default
- Better fractional scaling support
- Improved compatibility with NVIDIA drivers
- Enhanced HDR and color management

These changes aim to deliver smoother animations, more responsive desktops, and better support for modern displays.
For gamers and users with high-refresh monitors, these upgrades are especially noticeable.

Performance and Responsiveness Gains

Beyond graphics, GNOME 50 includes multiple performance optimizations:

- Faster file handling in the Files (Nautilus) app
- Improved thumbnail generation
- Reduced stuttering in animations
- Better resource usage across the desktop

These refinements make the desktop feel more responsive, particularly on systems with demanding workloads or multiple monitors.

New Parental Controls and Accessibility Features

GNOME 50 also expands its focus on usability and accessibility. Go to Full Article
- MX Linux Pushes Back Against Age Verification: A Stand for Privacy and Open Source Principles
by George Whittaker The MX Linux project has taken a firm stance in a growing controversy across the Linux ecosystem: mandatory age-verification requirements at the operating system level. In a recent update, the team made it clear: they have no intention of implementing such measures, citing concerns over privacy, practicality, and the core philosophy of open-source software.
As governments begin introducing laws that could require operating systems to collect user age data, MX Linux is joining a group of projects resisting the shift.

What Sparked the Debate?

The discussion around age verification stems from new legislation, particularly in regions like the United States and Brazil, that aims to protect minors online. These laws may require operating systems to:

- Collect user age or date of birth during setup
- Provide age-related data to applications
- Enable content filtering based on age categories

At the same time, underlying Linux components such as systemd have already begun exploring technical changes, including storing birthdate fields in user records to support such requirements.

MX Linux Says “No” to Age Verification

In response, the MX Linux team has clearly rejected the idea of integrating age verification into their distribution. Their reasoning is rooted in several key concerns:

- User privacy: collecting age data introduces sensitive personal information into systems that traditionally avoid such tracking
- Feasibility: implementing consistent, secure age verification across a decentralized OS ecosystem is highly complex
- Philosophy: open-source operating systems are not designed to act as data collectors or gatekeepers

The developers emphasized that they do not want to burden users with intrusive requirements and instead encouraged concerned individuals to direct their efforts toward policymakers rather than Linux projects.

A Broader Resistance in the Linux Community

MX Linux is not alone. The Linux world is divided on how, or whether, to respond to these regulations.
Some projects are exploring compliance, while others are pushing back entirely. In fact, age verification laws have sparked:

- Strong debate among developers and maintainers
- Concerns about enforceability on open-source platforms
- New projects explicitly created to resist such requirements

In some extreme cases, distributions have even restricted access in certain regions to avoid legal complications.

Why This Matters

At its core, this issue goes beyond a single feature; it raises fundamental questions about what an operating system should be.
Linux has long stood for: Go to Full Article
- LibreOffice Drives Europe’s Open Source Shift: A Growing Push for Digital Sovereignty
by George Whittaker LibreOffice is increasingly at the center of Europe’s push toward open-source adoption and digital independence. Backed by The Document Foundation, the widely used office suite is playing a key role in helping governments, institutions, and organizations reduce reliance on proprietary software while strengthening control over their digital infrastructure.
Across the European Union, this shift is no longer experimental; it’s becoming policy.

A Broader Movement Toward Open Source

Europe has been steadily moving toward open-source technologies for years, but recent developments show clear acceleration. Governments and public institutions are actively transitioning away from proprietary platforms, often citing concerns about vendor lock-in, cost, and data control.
According to recent industry data, European organizations are adopting open source faster than their U.S. counterparts, with vendor lock-in concerns cited as a major driver.
LibreOffice sits at the center of this trend as a mature, fully open-source alternative to traditional office suites.

LibreOffice as a Strategic Tool

LibreOffice isn’t just another productivity application; it has become a strategic component in Europe’s digital policy framework.
The software:

- Is fully open source and community-driven
- Supports open standards like OpenDocument Format (ODF)
- Allows governments to avoid dependency on specific vendors
- Enables long-term control over data and infrastructure

These characteristics align closely with the European Union’s broader strategy to promote interoperability and transparency through open standards.

Government Adoption Across Europe

LibreOffice adoption is already happening at scale across multiple countries and sectors.
Examples include:

- Germany (Schleswig-Holstein): transitioning tens of thousands of government systems to Linux and LibreOffice
- Denmark: replacing Microsoft Office in public institutions as part of a broader digital sovereignty initiative
- France and Italy: deploying LibreOffice across ministries and defense organizations
- Spain and local governments: adopting LibreOffice to standardize workflows and reduce costs

In some cases, migrations involve hundreds of thousands of systems, demonstrating that open-source office software is viable at national scale.

Go to Full Article
- From Linux to Blockchain: The Infrastructure Behind Modern Financial Systems
by George Whittaker

The modern internet is built on open systems. From the Linux kernel powering servers worldwide to the protocols that govern data exchange, much of today’s digital infrastructure is rooted in transparency, collaboration, and decentralization. These same principles are now influencing a new frontier: financial systems built on blockchain technology.
For developers and system architects familiar with Linux and open-source ecosystems, the rise of cryptocurrency is not just a financial trend; it is an extension of ideas that have been evolving for decades.

Open-Source Foundations and Financial Innovation

Linux has long demonstrated the power of decentralized development. Instead of relying on a single authority, it thrives through distributed contributions, peer review, and community-driven improvement.
Blockchain technology follows a similar model. Networks like Bitcoin operate on open protocols, where consensus is achieved through distributed nodes rather than centralized control. Every transaction is verified, recorded, and made transparent through cryptographic mechanisms.
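This verification-through-cryptography idea is quite concrete. In Bitcoin, for instance, a block’s identifier is simply the double SHA-256 hash of its 80-byte header, which anyone can recompute independently. A minimal sketch, using the publicly known parameters of the genesis block:

```python
import hashlib
import struct

def block_hash(version, prev_hash_hex, merkle_root_hex, timestamp, bits, nonce):
    """Recompute a Bitcoin block hash: double SHA-256 over the 80-byte header.

    Hash fields are serialized little-endian (byte-reversed from their
    usual big-endian display form), as Bitcoin's wire format requires.
    """
    header = (
        struct.pack("<L", version)
        + bytes.fromhex(prev_hash_hex)[::-1]
        + bytes.fromhex(merkle_root_hex)[::-1]
        + struct.pack("<LLL", timestamp, bits, nonce)
    )
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return digest[::-1].hex()  # display form is byte-reversed again

# Genesis block (height 0) parameters, which are fixed and public.
genesis = block_hash(
    version=1,
    prev_hash_hex="00" * 32,
    merkle_root_hex="4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
    timestamp=1231006505,
    bits=0x1D00FFFF,
    nonce=2083236893,
)
print(genesis)  # 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
```

Because any participant can redo this computation, the chain’s integrity rests on verification rather than on trusting any single party; the run of leading zeros in the result is exactly the proof-of-work condition the nonce was mined to satisfy.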
For those who have spent years working within Linux environments, this architecture feels familiar. It reflects a shift away from trust-based systems toward verification-based systems.

Understanding the Stack: Nodes, Protocols, and Interfaces

At a technical level, cryptocurrency systems are composed of multiple layers. Full nodes maintain the blockchain, validating transactions and ensuring network integrity. Lightweight clients provide access to users without requiring full data replication. On top of this, exchanges and platforms act as interfaces that connect users to the underlying network.
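Full nodes typically expose this functionality to scripts over an RPC interface. The following is a minimal sketch of talking to a Bitcoin Core node’s JSON-RPC endpoint; the default port 8332 and the `getblockcount` method follow Bitcoin Core’s conventions, while the host and credentials shown are placeholders:

```python
import base64
import json
from urllib import request

def build_rpc_payload(method, params=None):
    """Build a JSON-RPC 1.0 request body in the shape Bitcoin Core accepts."""
    return json.dumps({
        "jsonrpc": "1.0",
        "id": "demo",
        "method": method,
        "params": params or [],
    })

def rpc_call(url, user, password, method, params=None):
    """POST a JSON-RPC request to a full node and return its 'result' field."""
    req = request.Request(
        url,
        data=build_rpc_payload(method, params).encode(),
        headers={"Content-Type": "application/json"},
    )
    # Bitcoin Core's RPC server authenticates with HTTP basic auth.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with request.urlopen(req) as resp:
        return json.load(resp)["result"]

# With a node running locally (URL and credentials here are placeholders):
# height = rpc_call("http://127.0.0.1:8332", "rpcuser", "rpcpass", "getblockcount")
```

The same helper works for any RPC method the node exposes, which is what makes this layer easy to fold into shell-driven automation and monitoring.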
For developers, interacting with these systems often involves APIs, command-line tools, and automation scripts: tools that are already integral to Linux workflows. Managing wallets, verifying transactions, and monitoring network activity can all be integrated into existing development environments.

Go to Full Article