
- Debian OpenJDK 17 Critical Cryptographic Failures Advisory DSA-6237-1
Several vulnerabilities have been discovered in the OpenJDK Java runtime, which may result in incorrect generation of cryptographic keys, denial of service, information disclosure, XXE attacks, or incorrect validation of Kerberos credentials. For the oldstable distribution (bookworm), these problems have been fixed.
- Debian DSA-6236-1 firefox-esr Critical Arbitrary Code Exec Issues
Multiple security issues have been found in the Mozilla Firefox web browser, which could potentially result in the execution of arbitrary code, information disclosure or sandbox escape. For the oldstable distribution (bookworm), these problems have been fixed in version 140.10.1esr-1~deb12u1.

- [$] LWN.net Weekly Edition for April 30, 2026
Inside this week's LWN.net Weekly Edition: Front: Famfs; Python packaging council; Zig concurrency; pages and folios; Strawberry music manager; 7.1 merge window. Briefs: GnuPG 2.5.19; Copy Fail; Plasma security; Fedora 44; Ubuntu 26.04; Niri 26.04; pip 26.1; RIP Seth Nickell; RIP Tomáš Kalibera; Quotes; ... Announcements: Newsletters, conferences, security updates, patches, and more.
- A security bug in AEAD sockets
Security analysis firm Xint has disclosed a security bug in the Linux kernel that allows for arbitrary 4-byte writes to the page cache, and which has been present since 2017. The vulnerability has been fixed in mainline kernels. A proof-of-concept script demonstrates how to use the flaw to corrupt a setuid binary, which works on multiple distributions, by requesting an AEAD-encrypted socket from user space and splicing a particular payload into it. A supplemental blog post gives more details about the discovery and remediation. A core primitive underlying this bug is splice(): it transfers data between file descriptors and pipes without copying, passing page-cache pages by reference. When a user splices a file into a pipe and then into an AF_ALG socket, the socket's input scatterlist holds direct references to the kernel's cached pages of that file. The pages are not duplicated; the scatterlist entries point at the same physical pages that back every read(), mmap(), and execve() of that file.
- [$] Python packaging council approved
The Python packaging world now has a formal governance council, of the form described in PEP 772 ("Packaging Council governance process"), which was approved by the steering council on April 16. It has been over a year since the PEP was first proposed in February 2025, and it has undergone lengthy discussions in multiple postings to the Python discussion forum. The packaging council will have "broad authority over packaging standards, tools, and implementations"; it will consist of five members who will be elected in a vote that is likely to come in June, after PyCon US 2026 is held in mid-May.
- Security review of Plasma Login Manager (SUSE Security Team Blog)
SUSE's Security Team has published a detailed blog post on their recent review of the Plasma Login Manager version 6.6.2, which was forked from the SDDM display manager.
While most of the code remains the same, the new upstream added a privileged D-Bus helper called plasmaloginauthhelper, which suffers from defense-in-depth security issues.
[...] Based on the high severity of the defense-in-depth issues shown in this report, our assessment is that there is effectively no separation between root and the plasmalogin service user account.
At this time there is no bugfix available by upstream, but a security fix is planned for the next Plasma release on May 12. We have not been involved in upstream's bugfix process so far and have no knowledge about the approach that will be taken to address the issues from this report.
- Security updates for Wednesday
Security updates have been issued by AlmaLinux (firefox, gdk-pixbuf2, java-17-openjdk, libxml2, python3, python3.11, python3.12, sudo, and webkit2gtk3), Debian (dnsdist, node-tar, pdns, pdns-recursor, and policykit-1), Fedora (chromium, edk2, and vim), Oracle (firefox, gdk-pixbuf2, go-toolset:rhel8, libpng12, LibRaw, libxml2, python, python3, python3.11, python3.12, python3.12-wheel, vim, webkit2gtk3, xorg-x11-server, xorg-x11-server-Xwayland, yggdrasil, and yggdrasil-worker-package-manager), Red Hat (container-tools:rhel8, delve, git-lfs, go-rpm-macros, grafana, grafana-pcp, osbuild-composer, and rhc), SUSE (bouncycastle, clamav, container-suseconnect, dovecot22, erlang, firefox, fontforge, freerdp2, ghostscript, giflib, gnome-remote-desktop, go1.25, go1.26, google-guest-agent, haproxy, ignition, ImageMagick, kernel, libcap, libpng16, libraw, librsvg, mariadb, openexr, pocketbase, protobuf, python-Pillow, python-requests, qemu, rust1.94, sudo, tomcat, tomcat10, tomcat11, webkit2gtk3, and xen), and Ubuntu (dotnet10, dovecot, linux-nvidia-lowlatency, node-follow-redirects, openssh, packagekit, python-cryptography, python-tornado, ruby-rack-session, ujson, and wheel).
- Remembering Seth Nickell
LWN has received, from his father Eric Nickell, the sad news that Seth Nickell passed away on April 16:
Many of you knew Seth from his work in the GNOME Usability Project, but his roots in that community trace back to his high school years. As a father of a high school junior, I remember being terrified when he flashed the hard drive of a computer he purchased for himself with this weird "Linux" thing. And I was a bit awed by the college application essay he wrote about open source and Linus Torvalds.
It was his interest in packet radio that drew him into working with the Linux AX.25 HOWTO as a high schooler, and from there to his focus on making the Linux desktop work for everyone.
The family plans to share news of a memorial at a later time. He will be deeply missed.
- Fedora Linux 44 has been released
The Fedora Project has announced the release of Fedora Linux 44. There are "what's new" articles for Fedora Workstation, Fedora KDE Plasma Desktop, and Fedora Atomic Desktops. The Fedora Asahi Remix for Apple Silicon Macs, based on Fedora 44, is also available. See the Fedora Spins page for a full list of alternative desktop options.
Fedora Linux 44 Workstation ships with the latest GNOME release, GNOME 50. This comes with a long list of refinements to your desktop, including everything from accessibility to color management and remote desktop. Many of the applications that are installed by default on Fedora Workstation have also seen improvements, from Document Viewer to File Manager and Calendar. To learn more about these and other changes, you can read the GNOME 50 release notes.
KDE Plasma Desktop: If you are a KDE user, you should also notice a couple of very obvious changes. Fedora KDE Plasma Desktop 44 is based on the latest Plasma 6.6, which includes the new Plasma Login Manager and Plasma Setup to provide a more cohesive and integrated experience from the moment the computer is powered on for the first time. The installation process has been simplified, enabling you to easily set up Fedora KDE Plasma Desktop on a computer for a friend or a loved one.
The release notes include important changes between Fedora 43 and Fedora 44 for desktop users, developers, and system administrators.
- [$] Strawberry is ripe for managing music collections
There are dozens of music-player applications for Linux; the options range from bare-bones programs that only play local files to full-blown music-management projects with a full suite of tools for managing (and playing) a music collection. Strawberry is in the latter category; it has a bumper crop of features, including smart playlists, support for editing music metadata tags, the ability to organize music files, and more.
- In Memoriam: Tomáš Kalibera
We have received the sad news that Tomáš Kalibera, a member of the R Project core team, has passed away after a short illness.
A friend who knew him well wrote to me: he was very happy, and his work fulfilled him. That is, perhaps, the best thing one can say about a life in open source — that the work mattered, that it reached millions, and that the person who did it found meaning in it. Kalibera was mentioned in this 2019 article about C programs passing strings to Fortran subroutines. He will be greatly missed.
- All FOSDEM 2026 videos are online
FOSDEM's organizers have announced that all of the video recordings "worth publishing" from FOSDEM 2026 are now available. Videos are linked from the individual schedule pages for the talks and the full schedule page. They are also available, organised by room, at video.fosdem.org/2026. LWN's coverage of talks from FOSDEM 2026 can be found on our conference index.

- The Intel Lunar Lake CPU Performance Gains On Linux Over The Past Year
Recently I ran benchmarks looking at the Xe2 graphics performance gains on Intel Lunar Lake over the past year with what's shipped by Ubuntu and comparing against our original tests of the Lenovo ThinkPad X1 Carbon Gen 13 Aura Edition. With those Lunar Lake iGPU benchmarks out of the way, here is a look at how the Lunar Lake CPU performance has evolved on Linux since April 2025.
- Microsoft opens door to the past by releasing 86-DOS and PC-DOS 1.00
Back to a time when source repositories were printouts and commits were hand-written notes. Antiques code show: Microsoft has released the source for another of its relics. This time, it's 86-DOS 1.00 getting the open source treatment, and a whole lot more for retro enthusiasts.…
- MiciMike board converts Google Home Mini into local Home Assistant voice device
Crowd Supply recently featured the MiciMike Home Mini Drop-In PCB, an open hardware replacement for the first-generation Google Home Mini that enables fully local Home Assistant voice control. It installs without case modifications or soldering, reusing the original hardware. The platform is built around an Espressif ESP32-S3, based on a dual-core Xtensa LX7 CPU clocked […]

- New Sam Bankman-Fried Trial Would Be Huge Waste of Court's Time, Judge Says
A federal judge denied Sam Bankman-Fried's request for a new trial, calling his claims of DOJ witness intimidation "wildly conspiratorial" and unsupported by the record. Judge Lewis Kaplan said (PDF) the FTX founder's motion appeared tied to a pre-indictment plan to recast himself as a Republican victim of Biden's DOJ in hopes of gaining sympathy, leniency, or even a Trump pardon. Ars Technica reports: Bankman-Fried was sentenced to 25 years in prison in 2024 for "masterminding one of the largest financial frauds in American history," US District Judge Lewis Kaplan wrote in his order. He was convicted on all charges, including wire fraud, conspiracy to commit securities fraud, commodities fraud, and money laundering. There is already an appeal pending in another court, the judge noted. But Bankman-Fried filed a separate motion for a new trial, claiming that there were "newly discovered" witnesses and evidence that might have helped his defense, if Joe Biden's Department of Justice hadn't intimidated them into refusing to testify or, in one case, lying on the stand. He also asked for a new judge, wanting Kaplan to recuse himself. However, Kaplan pointed out that "none of the witnesses" were "newly discovered." And more concerningly, Bankman-Fried offered no evidence that the witnesses could prove the "wildly conspiratorial" theory the FTX founder raised, claiming that their absence at the trial was a "product of government threats and retaliation," the judge wrote. Bankman-Fried's theory is "entirely contradicted by the record," Kaplan said. He emphasized that granting Bankman-Fried's request "would be a large waste of judicial resources as it could require another judge to familiarize himself or herself with an extensive and complicated record." Additionally, all three witnesses that Bankman-Fried claimed could give crucial testimony in his defense were known to him throughout the trial, and he never sought to compel their testimony. 
And the "self-serving social-media posts" of one witness who now claims that he lied when testifying against Bankman-Fried -- "Ryan Salame, who pleaded guilty" -- must be met with "utmost suspicion," Kaplan said. "If one were to take Salame at his current word, he lied under oath when pleading guilty before this Court," Kaplan wrote. Even if taken seriously, "his out-of-court, unsworn statements could not come anywhere close to clearing the bar to warrant a new trial," Kaplan said, deeming Salame's credibility "highly questionable." Further, "even if these individuals had testified for Bankman-Fried, his protestations that one or more of them would have supported his claims that FTX was not insolvent and that his victims all were compensated fully in the bankruptcy proceedings are inaccurate or misleading," Kaplan concluded. In the order, Kaplan's frustration seems palpable, as there may have been no need for him to rule on the motion at all after Bankman-Fried requested to withdraw it. But the judge said the ruling was needed because Bankman-Fried waited to file his withdrawal request until after the DOJ and the court had already spent time responding to and reviewing filings. Troublingly, Bankman-Fried's request to withdraw his request without prejudice would have allowed him to potentially request a new trial after the appeal ended. Based on the substance of the filing, that risked wasting future court resources, Kaplan determined. To prevent overburdening the justice system, Kaplan deemed it necessary to deny Bankman-Fried's motion and request for recusal, rather than allow him to withdraw the filing without prejudice.
Read more of this story at Slashdot.
- Ubuntu's AI Plans Have Linux Users Looking For a 'Kill Switch'
Canonical's plan to add AI features to Ubuntu has sparked pushback from users who are concerned it could follow Windows 11's AI-heavy direction. "After Canonical's announcement earlier this week that it's bringing AI features to Ubuntu, replies included requests for an AI 'kill switch' or a way to disable the upcoming features," reports The Verge. Canonical says it has no plans for a "global AI kill switch" but it will allow users to remove any AI features they don't want. From the report: In his original post, [Canonical's VP of engineering, Jon Seager] said the upcoming AI features will include accessibility tools like AI speech-to-text and text-to-speech, along with agentic AI features for tasks like troubleshooting and automation. Canonical is also encouraging its engineers to use AI more and plans to begin introducing AI features in Ubuntu "throughout the next year." In a follow-up comment, Seager clarified that, "my plan is to introduce AI-backed features as a 'preview' on a strictly opt-in basis in [Ubuntu version] 26.10. In subsequent releases, my plan is to have a step in the initial setup wizard that allows the user to choose whether or not they'd like the AI-native features enabled." Ultimately, he said, "All of these capabilities will be delivered as Snaps to the OS, layered on top of the existing Ubuntu stack. That means there will always be the option of removing those Snaps." Users who prefer to avoid AI entirely could switch to other distros like Linux Mint, Pop!_OS, or Zorin OS. "These distros have some similarities to Ubuntu, but may not necessarily adopt the new AI features Canonical is rolling out," adds The Verge.
- Joby Demos Its Air Taxi In NYC
Joby Aviation has completed demonstration flights of its electric air taxi over New York City, testing real routes between JFK and Manhattan helipads as it prepares for a future commercial service. The company says its eVTOL could turn a 60- to 120-minute airport trip into a flight of under 10 minutes, though commercial launch still depends on FAA certification. Electrive reports: To launch operations in New York City, Joby acquired Blade Urban Air Mobility last year. Blade already enables helicopter flights for affluent travelers between Manhattan and airports such as JFK or Newark in just five minutes, avoiding up to two hours of traffic and typical airport hassles. Joby aims to replace this service with quiet, electric air taxis as soon as possible, transitioning Blade's existing customers to the new technology. However, introducing a new aircraft into commercial service requires a years-long certification process, overseen in the US by the Federal Aviation Administration (FAA). Joby is now in the final phase of FAA certification. Following a series of demonstration flights in the San Francisco Bay Area, the company has tested its air taxi in New York City on real flight routes and under real-world conditions. During these tests, Joby demonstrated the acoustics and performance metrics critical for entering the urban air taxi market. During these demonstration flights, Joby's air taxi took off from John F. Kennedy International Airport (JFK) and landed at various helipads across the city, including Downtown Skyport and the helipads at West 30th Street and East 34th Street in Midtown, where Blade Air Mobility's premium passenger lounges are located. These locations represent some of the commercial routes Joby plans for New York [...]. Fun fact: Joby's eVTOL aircraft are between 100 and 1,000 times quieter than a conventional helicopter, operating at roughly 55-65 dB during takeoff and landing compared to 90+ dB for helicopters.
- Apple Gives Up On the Vision Pro After M5 Refresh Flop
MacRumors reports that Apple has effectively paused work on Vision Pro after the M5 refresh failed to revive demand. The team has reportedly been reassigned and the company is now shifting focus toward smart glasses instead. From the report: The Vision Pro has been criticized for its high price tag and its uncomfortable weight. The device is over 1.3 pounds, and even with the more comfortable Dual Knit Band that Apple added to redistribute weight, it continues to be hard to wear for long periods of time. The M5 chip added a 120Hz refresh rate, 10 percent more rendered pixels, and around 30 additional minutes of battery life, but the price tag stayed at $3,499, and it ended up not selling well. The Vision Pro has been unpopular since it first launched, and Apple only sold around 600,000 units in total. Insider sources told MacRumors that Apple has received an unusually high percentage of returns, far exceeding any other modern Apple product. [...] If Apple finds a way to create a much cheaper, more comfortable VR headset in the future, the Vision Pro line could be revived, but right now, the company has no plans to launch a new model. Apple has not discontinued the Vision Pro and is continuing to sell the M5 model. Instead of continuing to experiment with virtual reality, Apple is working on smart glasses that will eventually incorporate augmented reality capabilities, but the first version will be similar to the Ray-Ban Meta smart glasses with AI and no integrated display.
- California High-Speed Rail Price Tag Jumps To $231 Billion
Longtime Slashdot reader schwit1 writes: California's long-delayed high-speed rail project is now facing renewed scrutiny after state leaders revealed a dramatically higher price tag, now estimated at roughly $231 billion, nearly seven times the original $33 billion projection approved by voters in 2008. The revised figures have reignited talks in Sacramento over whether the project can realistically be completed, how long it will take, and whether the state can continue to fund it at this scale. Senator Strickland pointed to comments from Lou Thompson, former chair of the California High-Speed Rail Authority peer review group, who recently criticized the latest draft business plan. Thompson wrote that the 2026 draft plan "has reached a dead end," arguing that the project has drifted far from its original vision due to escalating costs, delays, and unfunded gaps. Under current projections, assuming funding and construction proceed as planned, service between San Francisco and Bakersfield could begin around 2033, while the full Los Angeles to San Francisco connection could extend to 2040.
- Colorado's Anti-Repair Bill Is Dead
An anonymous reader quotes a report from Wired: A controversial bill in Colorado that would have undone some repair protections in the state has failed. The bill had been the target of right-to-repair advocates, who saw it as a bellwether for how tech companies might try to undo repair legislation more broadly in the US. Colorado's landmark 2024 repair law, the Consumer Right to Repair Digital Electronic Equipment, went into effect in January 2026 and ensured access to tools and documentation people needed to modify and fix digital electronics such as phones, computers, and Wi-Fi routers. The new bill, SB26-090, would have carved out an exception to those repair protections for "critical infrastructure," a loosely defined term that repair advocates worried could be applied to just about any technology. SB26-090 was introduced during a Colorado Senate hearing on April 2 and was supported by lobbying efforts from companies such as Cisco and IBM. It passed that hearing unanimously. The bill then passed in the Colorado Senate on April 16. On Monday evening, the bill was discussed in a long, delayed hearing in the Colorado House's State, Civic, Military, and Veterans Affairs Committee. Dozens of supporters and detractors gave public comments. Finally, the bill was shot down in a 7-to-4 vote and classified as postponed indefinitely. "While we were making progress at chipping away at the momentum for it, we had still been losing," said Danny Katz, executive director of the local nonprofit consumer advocacy group CoPIRG. "So, we took nothing for granted, and I believe the incredible testimony from the broad range of cybersecurity experts, businesses, repair advocates, recyclers, and people who want the freedom to fix their stuff made a big difference."
- GitHub 'No Longer a Place For Serious Work', Says Hashicorp Co-Founder
Hashicorp co-founder Mitchell Hashimoto says GitHub's frequent outages have made it "no longer a place for serious work," prompting him to move his Ghostty terminal emulator project elsewhere after 18 years on the platform. The Register reports: "I've been angry about it. I've hurt people's feelings. I've been lashing out. Because GitHub is failing me, every single day, and it is personal. It is irrationally personal," he wrote. The reason for his ire is the service has become unreliable. "For the past month I've kept a journal where I put an 'X' next to every date where a GitHub outage has negatively impacted my ability to work," he wrote. "Almost every day has an 'X'. On the day I am writing this post, I've been unable to do any PR review for ~2 hours because there is a GitHub Actions outage." Hashimoto penned his post a few days before an April 28 incident that saw pull requests fail to complete due to an Elasticsearch SNAFU. Incidents like that mean Hashimoto has decided GitHub "is no longer a place for serious work if it just blocks you out for hours per day, every day." "It's not a fun place for me to be anymore," he lamented. "I want to be there but it doesn't want me to be there. I want to get work done and it doesn't want me to get work done. I want to ship software and it doesn't want me to ship software." The developer says he wants GitHub to improve, but "I also want to code. And I can't code with GitHub anymore. I'm sorry. After 18 years, I've got to go." He's open to a return if GitHub can deliver "real results and improvements, not words and promises." But for now, he's working to move Ghostty to another collaborative code locker. "We have a plan but I'm also very much still in discussions with multiple providers (both commercial and FOSS)," Hashimoto wrote. "It'll take us time to remove all of our dependencies on GitHub and we have a plan in place to do it as incrementally as possible." 
He's doing the equivalent of leaving a toothbrush at a former partner's house by leaving a read-only mirror of Ghostty on GitHub, and by keeping his personal projects on the Microsoft-owned service. But Hashimoto's moving his day job somewhere new. "Ghostty is where I, our maintainers, and our open source community are most impacted so that is the focus of this change. We'll see where it goes after that," he concluded.
- Should Schools Get Rid of Homework?
Tony Isaac shares a report from NPR: Federal survey data shows that the amount of math homework assigned to fourth and eighth grade students, in particular, has been steadily declining for the past decade. Some educators and parents say this is a good thing -- students shouldn't spend six or more hours a day at school and still have additional schoolwork to complete at home. But the research on homework is complicated. Some studies show that students who spend more time on homework perform better than their peers. For example, a longitudinal study released in 2021 of more than 6,000 students in Germany, Uruguay and the Netherlands found that lower-performing students who increased the amount of time they spent on math homework performed better in math, even one year later. Other studies, however, suggest homework has minimal effect on academic performance: A 1998 study of more than 700 U.S. students led by a researcher at Duke University found that more homework assigned in elementary grades had no significant effect on standardized test scores. The researchers did find small positive gains on class grades when they looked at both test scores and the proportion of homework students completed. More homework was also associated with negative attitudes about school for younger children in the study. "The best educators figured out a long time ago that we can control what we can control," and that's what happens during the school day, Superintendent Garrett said, not homework. "There has been a shift away from it naturally anyway, and I felt like this made it equitable across our entire school system." "The best argument for homework is that mathematical procedures require practice, and you don't want to waste classroom time on practice, so you send that home," said Tom Loveless, a researcher and former teacher who has studied homework. 
Ariel Taylor Smith, senior director of the Center for Policy and Action at the National Parents Union, said: "The thing they point to is that it's an equity issue, and not all parents have the same availability and ability to support their students. I would make the argument that if a kid is really far behind in school, that's an equity issue. They need the additional time to practice." Kids, she said, "need more practice ... Sometimes, you do have to practice the boring stuff, like math." "The interesting issue for folks to consider is not should there be more homework, but should there be better homework," said Joyce Epstein, who has studied homework and is the co-director of the Center on School, Family, and Community Partnerships at the Johns Hopkins University School of Education. "Better homework in math might be knowing the fact that kids don't have to be practicing for hours, 10 to 20 examples," when they could establish mastery in less time.
- Humanoid Robots Start Sorting Luggage In Tokyo Airport Test Amid Labor Shortage
An anonymous reader quotes a report from Ars Technica: Humanoid robots are getting a new gig as baggage handlers and cargo loaders at Tokyo's Haneda Airport -- part of a Japan Airlines experiment to address a human labor shortage as airport visitor numbers have surged in recent years. The demonstration, set to launch in May 2026, could eventually test humanoid robots in a wide range of airport tasks, including cleaning aircraft cabins and possibly handling ground support equipment such as baggage carts, according to a Japan Airlines press release. The trials are scheduled to run until 2028, which suggests that travelers flying into or out of Tokyo may spot some of the robots at work. [...] Japan Airlines is interested in testing whether humanoid robots powered by some of the latest AI models can adapt more readily to human work environments -- such as airports -- without requiring dedicated work stations or other significant workplace modifications. The airline's subsidiary, JAL Ground Service, has teamed up with GMO AI & Robotics Corporation to oversee the demonstration. The Japanese companies will test the G1 robot and Walker E robot from Chinese companies Unitree Robotics and UBTECH Robotics, according to The Asia Business Daily. Humanoid robots still typically cost tens of thousands of dollars per unit despite Chinese robotics manufacturers scaling up mass production, although the Unitree G1 robot costs as low as $13,500 for the baseline model. A new video from an apparently staged demonstration in an aircraft hangar shows one of the humanoid robots tottering up to a large, metal cargo container and making a vague pushing gesture. But the cargo container only begins to move once a human worker starts the conveyor belt to move the container toward the aircraft. Presumably, the robots will need to put in much more effective work if they're to prove as productive as human airport workers. 
Having robots working directly alongside humans will also introduce new safety considerations for airports like Haneda Airport, which is Japan's second-largest airport, with flights arriving approximately every two minutes. The first step in the pilot program will involve identifying which airport areas will be safest for humanoid robots.
- FDA Grants Quick Review For 3 Psychedelic Drug Trials
An anonymous reader quotes a report from NBC News: The Food and Drug Administration on Friday granted a quick review of three experimental psychedelic drugs meant to treat major depression and post-traumatic stress disorder. It's the latest move by the Trump administration signaling a shift in policy toward treatments that also give users a high -- coming a day after the Justice Department said it would ease restrictions on state-licensed medical marijuana. UK-based biotech company Compass Pathways said Friday it has received an expedited review for its experimental form of synthetic psilocybin for treatment-resistant depression. In a press release the company cited two large, phase 3 studies that had "generated positive data." Usona Institute, headquartered in Wisconsin, also said it's received a voucher for its work with psilocybin to treat major depressive disorder. In an email, a Usona spokesperson said the company expects the review process to last one to two months after it submits its application. "The voucher expedites the timeline only; it does not alter scientific or regulatory standards," the spokesperson wrote. New York-based Transcend Therapeutics has also been granted a priority review voucher for its experimental drug methylone for PTSD, Blake Mandell, the company's chief executive officer, said. "There's a battle still raging in their mind that we don't fully understand biochemically," FDA Commissioner Marty Makary said. "When you see something that looks promising for a community that is suffering with mental health illness, despair and suicidal ideation, you can't help but recognize that." Makary told NBC News that with the priority voucher program, the agency could potentially approve the first psychedelic drug by the end of summer.

- Microsoft lifts 2026 AI spend by $25 billion to cover component price rises
Will write checks for $190 billion and even those megabucks may not satisfy demand If you've felt the sting of surging hardware prices, Microsoft can sympathize because the company on Wednesday said it expects its 2026 capital expenditure will hit $190 billion, with $25 billion of that due to rising component costs.…
- Linux cryptographic code flaw offers fast route to root
Patches land for authencesn flaw enabling local privilege escalation Developers of major Linux distributions have begun shipping patches to address a local privilege escalation (LPE) vulnerability arising from a logic flaw.…
- Amazon chips no longer just a side dish, they're a $20B biz
The Trainium train keeps a-rollin' Amazon is now among the top three datacenter chip businesses in the world, as its semiconductor business surpassed a $20 billion annual run rate ... and it would be closer to $50 billion if it included itself among the customers, CEO Andy Jassy said during the company’s first quarter earnings call on Wednesday.…
- Researchers move in the right direction, develop powerful GPS interference alarm
ORNL says portable detector kit can separate real GPS signals from fake ones even at equal strength GPS spoofing, which sends fake satellite-like signals, and GPS jamming, which drowns receivers in noise, are increasingly serious problems. Researchers at Oak Ridge National Laboratory in Tennessee have created what they say is the most effective system yet for detecting GPS interference, which could help blunt such attacks.…
- Fedora 44 is out – countless versions of it
New sealed bootable container images and Stratis storage, too Fedora Linux 44 has arrived – in multiple formats and for several CPU families, including some new container formats and storage options.…
- Yet another experiment proves it's too damn simple to poison large language models
There is no 6 Nimmt! champion, but a $12 domain registration and one Wikipedia edit convinced several bots there was Unlike search engines that let you judge competing sources, search-backed AI chatbots can turn shaky web material into confident answers. Case in point: A security engineer convinced several bots that he was the reigning world champion of a popular German card game, even though no such championship exists.…

- Security: Why Linux Is Better Than Windows Or Mac OS
Linux is a free and open source operating system, first released in 1991 by Linus Torvalds. Since its release it has reached a user base that is widespread worldwide. Linux users swear by the reliability and freedom that this operating system offers, especially when compared to its counterparts, Windows and…
- Essential Software That Is Not Available On Linux OS
An operating system is essentially the most important component of a computer. It manages the different hardware and software components of a computer in the most effective way. There are different types of operating systems, and each comes with its own set of programs and software. You cannot expect a Linux program to have all…
- Things You Never Knew About Your Operating System
The advent of computers has brought about a revolution in our daily life. From computers so huge they filled a room, we have come a very long way to desktops and even palmtops. These machines have become our virtual lockers, and life without these networked machines has become unimaginable. Sending mails,…
- How To Fully Optimize Your Operating System
Computers and systems are tricky and complicated. If you lack a thorough, or even basic, knowledge of computers, you will often find yourself in a bind. You must understand that something as complicated as a computer requires constant care and constant cleaning up of junk files. Unless you put in the time to configure…
- The Top Problems With Major Operating Systems
There is no system that will not give you any problems. Even if your system and its operating system are easy to understand, there will be times when certain problems arise. Most of these problems are easy to handle and easy to get rid of. But you must be…
- 8 Benefits Of Linux OS
Linux is a small and fast-growing operating system. However, we can’t term it software yet. As discussed in the article about what a Linux OS can do, Linux is a kernel. Now, kernels are used for software and programs. These kernels are used by the computer and can be used with various third-party software…
- Things Linux OS Can Do That Other OS Can’t
What Is Linux OS? Linux, similar to Unix, is an operating system which can be used for various computers, handheld devices, embedded devices, etc. The reason why Linux-based systems are preferred by many is that they are easy to use and re-use. A Linux-based operating system is technically not an Operating System. Operating…
- PackageKit Interview
PackageKit aims to simplify the management of applications on Linux and GNU systems. The main objective is to remove the pain it takes to manage a system. In an interview, Richard Hughes, the developer of PackageKit, said that he aims to make Linux systems just as powerful as Windows or…
- What’s New in Ubuntu?
What Is Ubuntu? Ubuntu is open source software. It is useful for Linux-based computers. The software is marketed by Canonical Ltd. and the Ubuntu community. Ubuntu was first released in late October 2004. The Ubuntu program uses the Java, Python, C, C++ and C# programming languages. What Is New? Version 17.04 is now available here…
- Ext3 Reiserfs Xfs In Windows With Regards To Colinux
The problem with Windows is that there are various limitations to the computer and there is only so much you can do with it. You can access Ext3, ReiserFS and XFS by using the coLinux tool. Download the tool from the official site or from the SourceForge site. Edit the connection to “TAP Win32 Adapter…

- Earliest 86-DOS and PC-DOS code released as open source
Microsoft is continuing its efforts to release early versions of DOS as open source, and today we’ve got a special one. We’re stoked today to showcase some newly available source code materials that provide an even earlier look into the development of PC-DOS 1.00, the first release of DOS for the IBM PC. A dedicated team of historians and preservationists led by Yufeng Gao and Rich Cini has worked to locate, scan, and transcribe the stack of DOS-era source listings from Tim Paterson, the author of DOS. The listings include sources to the 86-DOS 1.00 kernel, several development snapshots of the PC-DOS 1.00 kernel, and some well-known utilities such as CHKDSK. Not only were these assembler listings, but there were also listings of the assembler itself! This work offers rare insight into how MS-DOS/PC-DOS came to be, and how operating system development was done at the time, not as it was later reconstructed. ↫ Stacey Haffner and Scott Hanselman It’s wild that the source code had to be transcribed from paper, including notes and changes. You can find more information about the process on Gao’s website and Cini’s website.
- Apple gives up on Vision Pro, disbands Vision Pro team
When Apple unveiled the Vision Pro, almost three (!) years ago, I concluded: If there’s one company that can convince people to spend $3500 to strap an isolating dystopian glowing robot mask onto their faces it’s Apple, but I still have a hard time believing this is what people want. ↫ Thom Holwerda at OSNews (quoting myself is weird) MacRumors’ Juli Clover, today: Apple has all but given up on the Vision Pro after the M5 model failed to revitalize interest in the device, MacRumors has learned. Apple updated the Vision Pro with a faster M5 chip and a more comfortable band in October 2025, but there were no other hardware changes, and consumers still weren’t interested. Apple has apparently stopped work on the Vision Pro and the Vision Pro team has been redistributed to other teams within Apple. Some former Vision Pro team members are working on Siri, which is not a surprise as Vision Pro chief Mike Rockwell has been leading the Siri team since March 2025. ↫ Juli Clover at MacRumors VR, which is what the Vision Pro is, whether Apple’s marketing likes to say it or not, has proven to be good for exactly two things: games and porn. The Vision Pro has neither. It was destined to be a flop from the start, as nobody wants to strap an uncomfortable computer to their face that does less than all of the other computers they already have, and what it does do, it does worse. I do wonder if this makes the Vision Pro the most expensive flop in human history. Has any company ever spent more on a product that failed this spectacularly?
- Apple wants to kill your Time Capsule, but they run NetBSD so they can’t
It seems like Apple is finally going to remove support for AFP from macOS, twelve years after first moving from AFP to SMB as its default network file-sharing technology. This change shouldn’t impact most people, as it’s highly unlikely you’re using AFP for anything in 2026. Still, there is one small group of people to whom this change has an actual impact: owners of Apple’s Time Capsule devices. Time Capsules only support AFP and SMB1, and with SMB1 having been removed from macOS ages ago, and now AFP being on the chopping block as well, macOS 27 would render your Time Capsule more or less unusable. It’s important to note that the last Time Capsule sold by Apple, the fifth generation, was released in 2013, and the product line as a whole was discontinued in 2018. If you bought a Time Capsule in the twilight years of the line’s availability, I think you have a genuine reason to be perturbed by Apple cutting you off from your product if you upgrade to macOS 27, but at least you have the option of keeping an older version of macOS around so you can keep interacting with your Time Capsule. It still feels like a bit of a shitty move though, as those fifth-generation models came with up to 3TB of storage, which can still serve as a solid NAS solution. Thank your lucky stars, then, that open source can, as usual, come to the rescue when proprietary software vendors do what they always do and screw over their customers. Did you know every generation of Time Capsule actually runs NetBSD, and that it’s trivially easy to add support for Samba 4 and SMB3 authentication to your Time Capsule, thereby extending its life expectancy considerably? TimeCapsuleSMB does exactly that. If the setup completes successfully, your Time Capsule will run its own Samba 4 server, advertise itself over Bonjour (showing up automatically in the “Network” folder on macOS), and accept authenticated SMB3 connections from macOS.
You should then be able to open Finder, choose Connect to Server, and use a normal SMB URL instead of relying on Apple’s legacy stack. You should also be able to use the disk for Time Machine backups. ↫ TimeCapsuleSMB It’s compatible with both NetBSD 4 and NetBSD 6-based Time Capsules, although you’ll need to run a single SMB activation command every time a NetBSD 4-based Time Capsule reboots. This will also disable any AFP and SMB1 support, but that is kind of moot since those are exactly the technologies that don’t and won’t work anymore once macOS 27 is released. The installation is also entirely reversible if, for whatever reason, you want to undo the addition of Samba 4. This whole saga is such an excellent example of why open source software protects users’ rights, by design.
- Dillo 3.3.0 released
Dillo is an amazing web browser for those of us who want their web browsing experience to be calmer and less flashy. Dillo also happens to be a very UNIX-y browser, and their latest release, 3.3.0, underlines that. A new dilloc program is now available to control Dillo from the command line or from a script. It searches for Dillo by the PID in the DILLO_PID environment variable or for a unique Dillo process if not set. ↫ Dillo 3.3.0 release notes You can use this program to control your Dillo instance, with basic commands like reloading the current URL, opening a new URL, and so on, but also things like dumping the current page’s contents. I have a feeling more commands and features will be added in future releases, but for now, even the current set of commands can be helpful for scripting purposes. I’m sure some of you who live and die in the terminal are already thinking of all the possibilities here. You can now also add page actions to the right-click context menu, so you can do things like reload a page with a Chrome curl impersonator to avoid certain JavaScript walls. This, too, is of course extensible. Dillo 3.3.0 also brings experimental support for building the browser with FLTK 1.4, and implements a fix specifically to make OAuth work properly.
- Ubuntu is going to integrate “AI”, but Canonical remains vague about the how and why
Ubuntu, being one of the more commercial Linux distributions, was always going to jump on the “AI” bandwagon, and Jon Seager, Canonical’s VP of Engineering, published a blog post with more details. Throughout 2026 we’ll be working on enabling access to frontier AI for Ubuntu users in a way that is deliberate, secure, and aligned with our open source values. By focusing on the combination of education for our engineers, our existing knowledge of building resilient systems and our strengthening silicon partnerships, we will deliver efficient local inference, powerful accessibility features, and a context-aware OS that makes Ubuntu meaningfully more capable for the people who rely on it. Ubuntu is not becoming an AI product, but it can become stronger with thoughtful AI integration. ↫ Jon Seager at Ubuntu Discourse The problem with this entire post is that, much like all other corporate communications about “AI”, it’s all deceptively vague, open-ended, and weaselly. Adjectives like “focused”, “principled”, “thoughtful”, and “tasteful” don’t really mean anything, and leave everything open for basically every type of slop “AI” feature under the sun. Their claims about open weights and open source models are also weakened by words like “favour” and “where possible”, again leaving the door wide open for basically any shady “AI” company’s models and features to find their way into your default Ubuntu installation. There’s also very little in terms of concrete plans and proposed features, leaving Ubuntu users in the dark about what, exactly, is going to be added to their operating system of choice during the remainder of the year. There’s mention of improved text-to-speech/speech-to-text and text regurgitators, but that’s about it. None of it feels particularly inspired or ground-breaking, and the veneer of open source, ethical model creation, and so on, is particularly thin this time around, even for Canonical. I don’t really feel like I know a lot more about Canonical’s “AI” intentions for Ubuntu after reading this post than I did before, other than that Ubuntu users might be able to generate text in their email client or whatever later this year. Is that really something anybody wants?
- If 64bit Windows 11 contains a copy of 32bit explorer.exe, could you run it as its shell?
Raymond Chen published a blog post about how a crappy uninstaller on Windows caused a mysterious spike in the number of Explorer (Windows graphical shell) crashes. It turns out the buggy uninstaller caused repeated crashes in the 32bit version of Explorer on 64bit systems, and hold on a minute. The how many bits on the what now? The 32-bit version of Explorer exists for backward compatibility with 32-bit programs. This is not the copy of Explorer that is handling your taskbar or desktop or File Explorer windows. So if the 32-bit Explorer is running on a 64-bit system, it’s because some other program is using it to do some dirty work. ↫ Raymond Chen at The Old New Thing So I had no idea that 64bit Windows included a copy of the 32bit Explorer for backwards compatibility. It obviously makes sense, but I just never stopped to think about it. This made me wonder, though, if you could go nuts and do something really dumb: could you somehow trick 64bit Windows into running this 32bit copy of Explorer as its shell? You’d be running 32bit Explorer on 64bit Windows using the 32bit WoW64 binaries where you just pulled the 32bit Explorer binary from, which seems like a really nonsensical thing to do. Since there are no longer any 32bit builds of Windows 11, you also can’t just copy over the 32bit Explorer from a 32bit Windows 11 build and achieve the same goal that way, so you’d really have to go digging around in WoW64 to get 32bit versions. I guess the answer to this question depends on just how complete this copy of 32bit Explorer really is, and if Windows has any defenses or triggers in place to prevent someone from doing something this uselessly stupid. Of course, there’s no practical reason to do any of this and it makes very little sense, but it might be a fun hacking project.
Most likely the Windows experts among you are wondering what kind of utterly deranged new designer drug I’m on, but I was always told that sometimes the dumbest questions can lead to the most interesting answers, so here we are.
- 8087 emulation on 8086 systems
Not too long ago I had a need and an opportunity to re-acquaint myself with the mechanism used for software emulation of the 8087 FPU on 8086/8088 machines. ↫ Michal Necasek Look, when a Michal Necasek article starts out like this, you know you’re in for a learnin’ ol’ time. The 8087 was a floating-point coprocessor for the 8086 and 8088 processors, since back in those early days, processors did not include an integrated floating-point unit. It wouldn’t be until the release of the 486DX, in 1989, that Intel would integrate an FPU inside the processor itself, negating the need for a separate chip and socket. Interestingly enough, Intel also released a cut-down version of the 486 with the FPU removed, the 486SX, for which an optional external FPU did exist.
- How hard is it to open a file?
Sebastian Wick has a great explanation of why opening files programmatically is a lot more complex and fraught with dangers than you might think it is. This issue was relevant for Wick as he is one of the lead developers of Flatpak, for which a number of security issues have recently been discovered, and it just so happens that many of these issues dealt with this very topic. The biggest security issue found was a complete sandbox escape, originating from the fact that flatpak run, the command-line tool to start a Flatpak application, accepted path strings, since flatpak run is assumed to be run by a trusted user. The problem lay in a D-Bus service sandboxed applications could use to create subsandboxes, and this service was built around, you guessed it, flatpak run. The issues in question, including this complete sandbox escape, have been addressed and fixed, but they highlight exactly the dangers that can come from opening files. This subsandboxing approach in Flatpak is built on assumptions from fifteen years ago, and times have changed since then. If you’re a programmer who deals with opening files, you might want to take a look at your own code to see if similar issues exist.
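Wick's point about path strings generalizes: any code that accepts a caller-supplied path is one symlink away from opening something it never intended to. Below is a minimal sketch in Python of one mitigation, refusing to follow a symlink on open; the helper name and demo files are mine, not Flatpak's actual code.

```python
import os
import tempfile

def open_untrusted(path: str) -> int:
    """Open a caller-supplied path without following a symlink.

    A plain open() follows symlinks, so a sandboxed caller could hand us
    "notes.txt" that actually points at a sensitive file. O_NOFOLLOW makes
    the open fail if the final path component is a symlink.
    """
    return os.open(path, os.O_RDONLY | os.O_NOFOLLOW)

# Demo: a regular file opens, a symlink to it is refused.
d = tempfile.mkdtemp()
real = os.path.join(d, "real.txt")
with open(real, "w") as f:
    f.write("ok")
link = os.path.join(d, "link.txt")
os.symlink(real, link)

os.close(open_untrusted(real))   # regular file: succeeds
try:
    open_untrusted(link)
    print("symlink followed")
except OSError:
    print("symlink refused")     # this branch is taken
```

Note that `O_NOFOLLOW` only guards the final component; a careful sandbox also has to worry about symlinks in intermediate directories, which is what newer kernel APIs like `openat2()` with `RESOLVE_NO_SYMLINKS` address.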
- AI as a fascist artifact
In that reading „AI“ is a machine for the creation of epistemic injustice and the replacement of truth with what a tech elite wants it to be in order to control the population. This is a Fascist project that not so subtly aligns with Fascism’s totalitarian will to power and control as well as its reliance in replacing reasoning and debate with belief in power and the leader. ↫ Jürgen Geute The purpose of a system is what it does, and what “AI” does is stunt users’ own abilities and development and concentrate power and wealth even further in the hands of a very small privileged few, a privileged few who consistently espouse fascist ideology and promote and implement fascist ideas. Jürgen Geute lays it out in much more detail, backed by solid references and concrete examples, but the conclusion is clear. And uncomfortable to many, as such conclusions always are.
- Ubuntu 26.04 LTS Resolute Raccoon released
I’m not sure many OSNews readers still use Ubuntu as their operating system of choice, and from the release announcement of today’s Ubuntu 26.04 it’s clear why that’s the case. Resolute Raccoon builds on the resilience-focused improvements introduced in interim releases, with TPM-backed full-disk encryption, improved support for application permission prompting, Livepatch updates for Arm-based servers, and Rust-based utilities for enhanced memory safety. This release brings native support for industry-leading AI/ML toolkits like NVIDIA CUDA and AMD ROCm, making Ubuntu 26.04 LTS the ideal platform for AI development and production workloads. ↫ Canonical press release It’s obvious where Canonical’s focus lies with Ubuntu, and us desktop people who don’t like “AI” aren’t it. On top of all the “AI” nonsense, this new version comes with all the latest versions of the various open source components that make up a Linux distribution, as well as a slew of Rust-based replacements for core CLI tools, like sudo-rs, uutils coreutils, and more. All the derivative releases of Ubuntu, like Kubuntu, Xubuntu, and others, will also be updated over the coming days. If you’re already running any of these, updating won’t be a surprise to you.

- Canonical Unveils Ubuntu AI Strategy: Local Models, User Control, and Smarter Workflows
by George Whittaker Canonical has officially revealed its long-anticipated plans to bring artificial intelligence features into Ubuntu, marking a significant shift for one of the world’s most widely used Linux distributions. Rather than rushing into the AI wave, Canonical is taking a measured, privacy-focused approach, one that aims to enhance the operating system without compromising its open-source values.
The rollout is expected to take place gradually throughout 2026, with early features likely appearing in upcoming Ubuntu releases. A Gradual, Thoughtful AI Rollout: Canonical isn’t positioning Ubuntu as an “AI-first” operating system. Instead, the company is introducing AI in stages, focusing on practical improvements rather than hype-driven features.
The plan follows a two-phase model: implicit AI features (enhancements running quietly in the background) and explicit AI features (user-facing tools and workflows powered by AI). This approach allows Ubuntu to evolve naturally, improving existing functionality before introducing more advanced capabilities. Local AI First, Not the Cloud: One of the most important aspects of Canonical’s strategy is its emphasis on local AI processing, also known as on-device inference.
Instead of sending data to remote servers, Ubuntu will aim to run AI models directly on the user’s hardware, reduce reliance on cloud services, and improve privacy and performance. Canonical has made it clear that local inference will be the default, with cloud-based options available only when explicitly chosen by the user.
This aligns closely with the privacy expectations of Linux users, who often prefer greater control over their data. What AI Features Could Look Like: Canonical has outlined several potential use cases for AI inside Ubuntu. These include accessibility improvements, with AI enhancing tools like speech-to-text, text-to-speech, and assistive technologies to make Ubuntu more inclusive and easier to use for a wider range of users; smarter system assistance, where future AI features may help users troubleshoot system issues, interpret logs and error messages, and automate repetitive tasks, potentially lowering the learning curve for new Linux users; and agent-based automation, as Canonical is also exploring “agentic” AI workflows, where AI can take actions on behalf of the user.
Examples include: Go to Full Article
- Thunderbird 150 Lands on Linux: Smarter Encryption, Better Tools, and a Polished Experience
by George Whittaker Mozilla has officially rolled out Thunderbird 150.0, the latest version of its open-source email client, bringing a mix of security-focused enhancements, usability upgrades, and workflow improvements for Linux and other platforms. Released in April 2026, this update continues Thunderbird’s steady evolution as a powerful desktop email solution.
For Linux users, Thunderbird 150 delivers meaningful updates that improve both everyday usability and advanced email handling, especially for encrypted communication. Stronger Support for Encrypted Email: One of the standout improvements in Thunderbird 150 is how it handles encrypted messages.
Users can now search inside encrypted emails (OpenPGP and S/MIME) and generate “unobtrusive” OpenPGP signatures that appear cleaner to recipients. These changes make encrypted communication far more practical, especially for users who rely on secure email for work or privacy-sensitive tasks. New Productivity and Workflow Features: Thunderbird 150 introduces several small but impactful workflow improvements: a new Account Hub opens automatically on first launch, simplifying setup; Recent Destinations in settings can now be sorted alphabetically; address book entries can be copied as vCard files; and a new custom accent color option allows interface personalization. These updates make Thunderbird easier to configure and more flexible to use daily. Improved Built-In PDF Viewer: Thunderbird’s integrated PDF viewer gets a useful upgrade: users can now reorder pages directly within the viewer.
This is particularly helpful for managing attachments without external tools, editing documents quickly before sending, and streamlining email-based workflows. Combined with ongoing security fixes, the PDF viewer becomes both more capable and safer. Calendar and Interface Enhancements: Several improvements focus on usability and accessibility: calendar views now support touchscreen scrolling; issues with calendar layouts and navigation have been fixed; screen reader support has improved, along with other accessibility fixes; and general UI refinements land across the application. These changes contribute to a smoother, more consistent user experience across devices. Bug Fixes and Stability Improvements: Thunderbird 150 also resolves a wide range of issues, including: Go to Full Article
- Linux Kernel 6.19 Reaches End of Life: Time to Move Forward
by George Whittaker The Linux kernel continues its fast-paced release cycle, and with that comes an important milestone: Linux kernel 6.19 has officially reached end of life (EOL). For users and distributions still running this branch, it’s now time to upgrade to a newer kernel version.
This isn’t unexpected, as Linux 6.19 was never intended to be a long-term release, but it does serve as a reminder of how quickly non-LTS kernel branches move through their lifecycle. Official End of Support: The final update in the 6.19 series, Linux 6.19.14, has been released and marked as the last maintenance version. Kernel maintainer Greg Kroah-Hartman confirmed that no further updates will follow, stating that the branch is now officially end-of-life.
On kernel.org, the 6.19 series is now listed as EOL, meaning it will no longer receive bug fixes or security patches. Why 6.19 Had a Short Lifespan: Unlike some kernel releases, Linux 6.19 was not a long-term support (LTS) version. Short-lived kernel branches are typically supported for only a few months before being replaced by newer releases.
Linux follows a rapid development model: new major versions are released frequently, short-term branches receive limited updates, and only selected kernels are designated as LTS for extended support. Because of this, 6.19 was always meant to be a stepping stone rather than a long-term foundation. What Users Should Do Now: With 6.19 no longer maintained, continuing to use it poses risks, especially in environments where security and stability matter.
Recommended upgrade paths include: Upgrade to Linux 7.0: The most direct path forward is the Linux 7.0 kernel series, which succeeds 6.19 and introduces new hardware support and ongoing fixes.
This is a good option for desktop users, rolling-release distributions, and users who want the latest features. Switch to an LTS Kernel: For production systems, servers, or long-term stability, moving to an LTS kernel is often the better choice.
Current LTS options include Linux 6.18 LTS (supported until 2028), Linux 6.12 LTS (supported until 2028), and Linux 6.6 LTS (supported until 2027). These versions receive ongoing security updates and are better suited for stable environments. Why EOL Matters: When a kernel reaches end of life: Go to Full Article
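The upgrade advice above boils down to a simple branch check, which is easy to automate. A minimal sketch in Python; the supported-branch set is copied from this article and will itself go stale, and the parsing is an illustration, not kernel.org's API:

```python
# Branches listed as supported in the article above; 6.19 is EOL.
SUPPORTED = {"7.0", "6.18", "6.12", "6.6"}

def branch(release: str) -> str:
    """Reduce a release string like '6.19.14-arch1' to its branch '6.19'."""
    major, minor = release.split(".")[:2]
    minor = minor.split("-")[0]  # strip suffixes like '-rc1'
    return f"{major}.{minor}"

def is_supported(release: str) -> bool:
    """True if the release belongs to a still-maintained branch."""
    return branch(release) in SUPPORTED

print(is_supported("6.19.14"))  # False: the 6.19 branch is EOL
print(is_supported("6.18.3"))   # True: LTS until 2028
```

On a live system you would feed this the output of `uname -r`; the point is only that "am I on a dead branch?" is one set lookup once you normalize the version string.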
- Archinstall 4.2 Shifts to Wayland-First Profiles, Leaving X.Org Behind
by George Whittaker The Arch Linux installer continues evolving alongside the broader Linux desktop ecosystem. With the release of Archinstall 4.2, a notable change has arrived: Wayland is now the default focus for graphical installation profiles, while traditional X.Org-based profiles have been removed or deprioritized.
This move reflects a wider transition happening across Linux, one that is gradually redefining how graphical environments are built and used. A Turning Point for Archinstall: Archinstall, the official guided installer for Arch Linux, has steadily improved over time to make installation more accessible while still maintaining Arch’s minimalist philosophy.
With version 4.2, the installer now aligns more closely with modern desktop trends by emphasizing Wayland-based environments during setup, instead of offering traditional X.Org configurations as first-class options.
This doesn’t mean X.Org is completely gone from Arch Linux, but it does signal a clear shift in direction. Why Wayland Is Taking Over: Wayland has been gaining traction for years as the successor to X.Org, offering a more streamlined and secure approach to rendering graphics on Linux.
Compared to X.Org, Wayland is designed to reduce complexity in the graphics stack, improve security by isolating applications, deliver smoother rendering and better performance, and support modern display technologies like high-DPI and variable refresh rates. As the Linux ecosystem evolves, many distributions and desktop environments are prioritizing Wayland as the default display protocol. What Changed in Archinstall 4.2: With this release, users installing Arch through Archinstall will notice that Wayland-based desktop environments and compositors are now the primary options, that X.Org-centric setups are no longer emphasized in guided profiles, and that installation workflows better reflect modern Linux defaults. This simplifies the installation experience for new users, who no longer need to choose between legacy and modern display systems during setup. What About X.Org? While Archinstall is moving forward, X.Org itself is not disappearing overnight.
Many applications and workflows still rely on X11, and compatibility is maintained through XWayland, which allows X11 applications to run within Wayland sessions.
For advanced users, Arch still provides full flexibility: Go to Full Article
- OpenClaw in 2026: What It Is, Who’s Using It, and Whether Your Business Should Adopt It
by George Whittaker “probably the single most important release of software, probably ever.”
— Jensen Huang, CEO of NVIDIA
Wow! That’s a bold statement from one of the most influential figures in modern computing.
But is it true? Some people think so. Others think it’s hype. Most are somewhere in between, aware of OpenClaw, but not entirely sure what to make of it. Are people actually using it? Yes. Who’s using it? More than you might expect. Is it experimental, or is it already changing how work gets done? That depends on how it’s being applied. Is it more relevant for businesses or consumers right now? That’s one of the most important, and most misunderstood, questions.
This article breaks that down clearly: what OpenClaw is, how it works, who is using it today, and where it actually creates value.
What makes OpenClaw different isn’t just the technology, it’s where it fits. Most of the AI tools people are familiar with still require a human to take the next step. They assist, but they don’t execute. OpenClaw changes that dynamic by connecting decision-making directly to action. Once you understand that shift, the rest of the discussion, who’s using it, how it’s being deployed, and where it creates value, starts to make a lot more sense.
Top 10 Questions About OpenClaw

What is OpenClaw?
OpenClaw is an open-source AI agent framework that enables large language models like Claude, GPT, and Gemini to execute real-world tasks across software systems, including APIs, files, and workflows.
What does OpenClaw actually do?
OpenClaw functions as an execution layer that allows AI systems to take actions, such as sending emails, updating CRM records, or running scripts, instead of only generating responses.
Do you need to be a developer to use OpenClaw?
No, but technical familiarity helps. Non-developers can use prebuilt workflows, while developers can customize and scale implementations more effectively.
Is OpenClaw more suited for business or consumer use?
OpenClaw is currently more suited for business and technical use cases where structured workflows exist. Consumer use is emerging but remains secondary.
How is OpenClaw different from ChatGPT or Claude?
ChatGPT and Claude generate outputs, while OpenClaw enables those outputs to trigger actions across connected systems.
Who created OpenClaw?

Go to Full Article
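The “execution layer” idea from the answers above can be illustrated with a small dispatcher: the model emits a structured action, and the framework maps it onto a registered function. This is a hypothetical sketch for illustration only; the tool names (`send_email`, `update_record`) and the action format are invented here and are not OpenClaw’s actual API.

```python
def send_email(to, subject):
    # Placeholder side effect; a real tool would call a mail API here.
    return f"email to {to}: {subject}"

def update_record(record_id, field, value):
    # Placeholder side effect; a real tool would hit a CRM endpoint.
    return f"record {record_id}: {field}={value}"

# Registry mapping tool names the model may emit onto real functions.
TOOLS = {"send_email": send_email, "update_record": update_record}

def execute(action):
    """Dispatch one model-proposed action to a registered tool."""
    tool = TOOLS.get(action["tool"])
    if tool is None:
        raise ValueError(f"unknown tool: {action['tool']}")
    return tool(**action["args"])

# A model response, instead of ending as text, triggers an action:
result = execute({"tool": "send_email",
                  "args": {"to": "ops@example.com", "subject": "Weekly report"}})
print(result)
```

The key design point is the allowlist: the model can only invoke tools the operator has explicitly registered, which is what separates an execution layer from handing a model a shell.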
- Linux Kernel Developers Adopt New Fuzzing Tools
by George Whittaker The Linux kernel development community is stepping up its security game once again. Developers, led by key maintainers like Greg Kroah-Hartman, are actively adopting new fuzzing tools to uncover bugs earlier and improve overall kernel reliability.
This move reflects a broader shift toward automated testing and AI-assisted development, as the kernel continues to grow in complexity and scale.

What Is Fuzzing and Why It Matters

Fuzzing is a software testing technique that feeds random or unexpected inputs into a program to trigger crashes or uncover vulnerabilities.
In the Linux kernel, fuzzing has become one of the most effective ways to detect:

- Memory corruption bugs
- Race conditions
- Privilege escalation flaws
- Edge-case failures in subsystems

Modern fuzzers like Syzkaller have already discovered thousands of kernel bugs over the years, making them a cornerstone of Linux security testing.

New Tools Enter the Scene

Recently, kernel maintainers have begun experimenting with new fuzzing frameworks and tooling, including a project internally referred to as “clanker”, which has already been used to identify multiple issues across different kernel subsystems.
Early testing has uncovered bugs in areas such as:

- SMB/KSMBD networking code
- USB and HID subsystems
- Filesystems like F2FS
- Wireless and device drivers

The speed at which these issues were discovered suggests that these new tools are significantly improving bug detection efficiency.

AI and Smarter Fuzzing Techniques

One of the most interesting developments is the growing role of AI and machine learning in fuzzing.
New research projects like KernelGPT use large language models to:

- Automatically generate system call sequences
- Improve test coverage
- Discover previously hidden execution paths

These techniques can enhance traditional fuzzers by making them smarter about how they explore the kernel’s behavior.
Other advancements include:

- Better crash analysis and deduplication tools (like ECHO)
- Configuration-aware fuzzing to explore deeper kernel states
- Feedback-driven fuzzing loops for improved coverage

Together, these innovations help developers focus on the most meaningful bugs rather than sifting through duplicate reports.

Why This Shift Is Happening Now

The Linux kernel is one of the most complex software projects in existence. With millions of lines of code and contributions from thousands of developers, manually catching every bug is nearly impossible.

Go to Full Article
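The core loop behind all of these tools is simple. As a toy illustration (not how Syzkaller or “clanker” are actually implemented), here is a minimal fuzzer in Python that throws random byte strings at a deliberately buggy parser and records any crash that is not an expected, graceful rejection:

```python
import random

def parse_header(data: bytes) -> int:
    """Toy parser with a planted bug: it trusts the length byte blindly."""
    if len(data) < 2 or data[0] != 0x7F:
        raise ValueError("bad magic")   # expected rejection path
    length = data[1]
    return data[2 + length]             # bug: no bounds check -> IndexError

def fuzz(trials: int = 5000, seed: int = 0):
    """Feed random inputs to the parser; collect crashes that are real bugs."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            parse_header(data)
        except ValueError:
            pass                        # graceful rejection, not a bug
        except Exception as exc:        # anything else is report-worthy
            crashes.append((data, type(exc).__name__))
    return crashes
```

Real kernel fuzzers add the pieces that make this practical at scale: coverage feedback to guide input mutation, grammar awareness for system call sequences, and crash deduplication so maintainers see each bug once.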
- GNOME 50 Reaches Arch Linux: A Leaner, Wayland-Only Future Arrives
by George Whittaker Arch Linux users are among the first to experience the latest GNOME desktop, as GNOME 50 has begun rolling out through Arch’s repositories. Thanks to Arch’s rolling-release model, new upstream software like GNOME arrives quickly, giving users early access to the newest features and architectural changes.
With GNOME 50, that includes one of the most significant shifts in the desktop’s history.

A Major GNOME Milestone

GNOME 50, officially released in March 2026 under the codename “Tokyo,” represents six months of development and refinement from the GNOME community.
Unlike some previous versions, this release focuses less on dramatic redesigns and more on strengthening the foundation of the desktop, improving performance, modernizing graphics handling, and simplifying long-standing complexities.
For Arch Linux users, that translates into a more streamlined and future-ready desktop environment.

Goodbye X11, Hello Wayland-Only Desktop

The headline change in GNOME 50 is the complete removal of X11 support from GNOME Shell and its window manager, Mutter.
After years of gradual transition:

- X11 sessions were first deprecated
- Then disabled by default
- And now fully removed in GNOME 50

This means GNOME now runs exclusively on Wayland, with legacy X11 applications handled through the XWayland compatibility layer.
The result is a simpler, more modern graphics stack that reduces maintenance overhead and improves long-term performance and security.

Improved Graphics and Display Handling

GNOME 50 brings several key improvements to display and graphics performance:

- Variable Refresh Rate (VRR) enabled by default
- Better fractional scaling support
- Improved compatibility with NVIDIA drivers
- Enhanced HDR and color management

These changes aim to deliver smoother animations, more responsive desktops, and better support for modern displays.
For gamers and users with high-refresh monitors, these upgrades are especially noticeable.

Performance and Responsiveness Gains

Beyond graphics, GNOME 50 includes multiple performance optimizations:

- Faster file handling in the Files (Nautilus) app
- Improved thumbnail generation
- Reduced stuttering in animations
- Better resource usage across the desktop

These refinements make the desktop feel more responsive, particularly on systems with demanding workloads or multiple monitors.

New Parental Controls and Accessibility Features

GNOME 50 also expands its focus on usability and accessibility.

Go to Full Article
- MX Linux Pushes Back Against Age Verification: A Stand for Privacy and Open Source Principles
by George Whittaker The MX Linux project has taken a firm stance in a growing controversy across the Linux ecosystem: mandatory age-verification requirements at the operating system level. In a recent update, the team made it clear that they have no intention of implementing such measures, citing concerns over privacy, practicality, and the core philosophy of open-source software.
As governments begin introducing laws that could require operating systems to collect user age data, MX Linux is joining a group of projects resisting the shift.

What Sparked the Debate?

The discussion around age verification stems from new legislation, particularly in regions like the United States and Brazil, that aims to protect minors online. These laws may require operating systems to:

- Collect user age or date of birth during setup
- Provide age-related data to applications
- Enable content filtering based on age categories

At the same time, underlying Linux components such as systemd have already begun exploring technical changes, including storing birthdate fields in user records to support such requirements.

MX Linux Says “No” to Age Verification

In response, the MX Linux team has clearly rejected the idea of integrating age verification into their distribution. Their reasoning is rooted in several key concerns:

- User privacy: Collecting age data introduces sensitive personal information into systems that traditionally avoid such tracking
- Feasibility: Implementing consistent, secure age verification across a decentralized OS ecosystem is highly complex
- Philosophy: Open-source operating systems are not designed to act as data collectors or gatekeepers

The developers emphasized that they do not want to burden users with intrusive requirements and instead encouraged concerned individuals to direct their efforts toward policymakers rather than Linux projects.

A Broader Resistance in the Linux Community

MX Linux is not alone. The Linux world is divided on how, or whether, to respond to these regulations.
Some projects are exploring compliance, while others are pushing back entirely. In fact, age verification laws have sparked:

- Strong debate among developers and maintainers
- Concerns about enforceability on open-source platforms
- New projects explicitly created to resist such requirements

In some extreme cases, distributions have even restricted access in certain regions to avoid legal complications.

Why This Matters

At its core, this issue goes beyond a single feature; it raises fundamental questions about what an operating system should be.
Linux has long stood for:

Go to Full Article
- LibreOffice Drives Europe’s Open Source Shift: A Growing Push for Digital Sovereignty
by George Whittaker LibreOffice is increasingly at the center of Europe’s push toward open-source adoption and digital independence. Backed by The Document Foundation, the widely used office suite is playing a key role in helping governments, institutions, and organizations reduce reliance on proprietary software while strengthening control over their digital infrastructure.
Across the European Union, this shift is no longer experimental; it’s becoming policy.

A Broader Movement Toward Open Source

Europe has been steadily moving toward open-source technologies for years, but recent developments show clear acceleration. Governments and public institutions are actively transitioning away from proprietary platforms, often citing concerns about vendor lock-in, cost, and data control.
According to recent industry data, European organizations are adopting open source faster than their U.S. counterparts, with vendor lock-in concerns cited as a major driver.
LibreOffice sits at the center of this trend as a mature, fully open-source alternative to traditional office suites.

LibreOffice as a Strategic Tool

LibreOffice isn’t just another productivity application; it has become a strategic component in Europe’s digital policy framework.
The software:

- Is fully open source and community-driven
- Supports open standards like OpenDocument Format (ODF)
- Allows governments to avoid dependency on specific vendors
- Enables long-term control over data and infrastructure

These characteristics align closely with the European Union’s broader strategy to promote interoperability and transparency through open standards.

Government Adoption Across Europe

LibreOffice adoption is already happening at scale across multiple countries and sectors.
Examples include:

- Germany (Schleswig-Holstein): transitioning tens of thousands of government systems to Linux and LibreOffice
- Denmark: replacing Microsoft Office in public institutions as part of a broader digital sovereignty initiative
- France and Italy: deploying LibreOffice across ministries and defense organizations
- Spain and local governments: adopting LibreOffice to standardize workflows and reduce costs

In some cases, migrations involve hundreds of thousands of systems, demonstrating that open-source office software is viable at national scale.

Go to Full Article
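Migrations like these usually begin with bulk document conversion, which LibreOffice can perform from the command line. A minimal sketch, assuming the `soffice` binary from a standard LibreOffice install is on the PATH (the guard makes the script a harmless no-op elsewhere):

```shell
#!/bin/sh
# Batch-convert .docx files in the current directory to ODF (.odt)
# using LibreOffice's headless mode, writing results into ./odf-out.
outdir="odf-out"
mkdir -p "$outdir"
if command -v soffice >/dev/null 2>&1; then
    soffice --headless --convert-to odt --outdir "$outdir" ./*.docx
else
    echo "soffice not found; skipping conversion" >&2
fi
```

Because conversion is scriptable, it can be folded into the same automation that provisions the Linux desktops themselves.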
- From Linux to Blockchain: The Infrastructure Behind Modern Financial Systems
by George Whittaker The modern internet is built on open systems. From the Linux kernel powering servers worldwide to the protocols that govern data exchange, much of today’s digital infrastructure is rooted in transparency, collaboration, and decentralization. These same principles are now influencing a new frontier: financial systems built on blockchain technology.
For developers and system architects familiar with Linux and open-source ecosystems, the rise of cryptocurrency is not just a financial trend, it is an extension of ideas that have been evolving for decades. Open-Source Foundations and Financial Innovation Linux has long demonstrated the power of decentralized development. Instead of relying on a single authority, it thrives through distributed contributions, peer review, and community-driven improvement.
Blockchain technology follows a similar model. Networks like Bitcoin operate on open protocols, where consensus is achieved through distributed nodes rather than centralized control. Every transaction is verified, recorded, and made transparent through cryptographic mechanisms.
For those who have spent years working within Linux environments, this architecture feels familiar. It reflects a shift away from trust-based systems toward verification-based systems.

Understanding the Stack: Nodes, Protocols, and Interfaces

At a technical level, cryptocurrency systems are composed of multiple layers. Full nodes maintain the blockchain, validating transactions and ensuring network integrity. Lightweight clients provide access to users without requiring full data replication. On top of this, exchanges and platforms act as interfaces that connect users to the underlying network.
For developers, interacting with these systems often involves APIs, command-line tools, and automation scripts: tools that are already integral to Linux workflows. Managing wallets, verifying transactions, and monitoring network activity can all be integrated into existing development environments.

Go to Full Article
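As a concrete taste of the verification-based model described above, transaction identifiers in Bitcoin-style networks are simply a double application of SHA-256 over the raw transaction bytes, reproducible with Python’s standard library. This is a simplified sketch: real transactions must first be serialized in the network’s exact wire format.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin-style hash primitive: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def txid(raw_tx: bytes) -> str:
    # Transaction IDs are conventionally displayed with the byte
    # order reversed (little-endian) relative to the raw digest.
    return double_sha256(raw_tx)[::-1].hex()
```

The same primitive underlies block hashes and Merkle trees, which is how a lightweight client can check that a transaction is included in a block without replicating the full chain.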