NTLUG

Linux is free.
Life is good.

Linux Training
10am on Meeting Days!

1825 Monetary Lane Suite #104 Carrollton, TX

Do a presentation at NTLUG.

What is the Linux Installation Project?

Real companies using Linux!

Not just for business anymore.

Providing ready to run platforms on Linux


LinuxSecurity - Security Advisories







LWN.net

  • [$] Gccrs after libcore
    Despite its increasing popularity, the Rust programming language is still supported by a single compiler, the LLVM-based rustc. At the 2025 GNU Tools Cauldron, Pierre-Emmanuel Patry said that a lot of people are waiting for a GCC-based Rust compiler before jumping into the language. Patry, who is working on just that compiler (known as "gccrs"), provided an update on the status of that project and what is coming next.


  • [$] Last-minute /boot boost for Fedora 43
    Sudden increases in the size of Fedora's initramfs files have prompted the project to fast-track a proposal to increase the default size of the /boot partition for new installs of Fedora 43 and later. The project has also walked back a few changes that have contributed to larger initramfs files, but the ever-increasing size of firmware means that the need for more room is unavoidable. The Fedora Engineering Steering Council (FESCo) has approved a last-minute change just before the final freeze for Fedora 43 to increase the default size of the /boot partition from 1GB to 2GB; this will leave plenty of space for kernels and initramfs images if a user is installing from scratch, but it is of no help for users upgrading from Fedora 42.


  • Ubuntu 25.10 released
    Ubuntu 25.10, "Questing Quokka", has been released. This release includes Linux 6.17, GNOME 49, GCC 15, Python 3.13.7, Rust 1.85, and more. It also features Rust-based implementations of sudo and coreutils; LWN covered the switch to the Rust-based tools in March. The 25.10 versions of the Ubuntu flavors Edubuntu, Kubuntu, Lubuntu, Ubuntu Budgie, Ubuntu Cinnamon, Ubuntu Kylin, Ubuntu MATE, Ubuntu Studio, and Xubuntu have also been released.



  • Security updates for Thursday
    Security updates have been issued by AlmaLinux (gnutls, kernel, kernel-rt, and open-vm-tools), Debian (chromium, python-django, and redis), Fedora (chromium, insight, mirrorlist-server, oci-seccomp-bpf-hook, rust-maxminddb, rust-prometheus, rust-prometheus_exporter, rust-protobuf, rust-protobuf-codegen, rust-protobuf-parse, rust-protobuf-support, turbo-attack, and yarnpkg), Oracle (iputils, kernel, open-vm-tools, redis, and valkey), Red Hat (perl-File-Find-Rule and perl-File-Find-Rule-Perl), SUSE (expat, ImageMagick, matrix-synapse, python-xmltodict, redis, redis7, and valkey), and Ubuntu (fort-validator and imagemagick).


  • [$] LWN.net Weekly Edition for October 9, 2025
    Inside this week's LWN.net Weekly Edition:
    Front: Kernel Rust features; systemd v258, part 2; Cauldron kernel hackers; BPF for GNU tools; 6.18 merge window, part 1; Lifetime-end pointer zapping; Robot Operating System. Briefs: OpenSSH 10.1; Firefox profiles; Python 3.14; U-Boot v2025.10; FSF presidency; Quotes; ... Announcements: Newsletters, conferences, security updates, patches, and more.


  • Better profile management coming to Firefox
    Firefox has long had support for multiple profiles to store personal information such as bookmarks, passwords, and user preferences. However, Firefox did not make profiles particularly discoverable or easy to manage. That is about to change; Mozilla has announced that it is launching a profile-management feature that will make it easier to create and switch between profiles. According to the support page for the feature, it will be rolled out to users gradually beginning on October 14.



  • [$] Upcoming Rust language features for kernel development
    The Rust for Linux project has been good for Rust, Tyler Mandry, one of the co-leads of Rust's language-design team, said. He gave a talk at Kangrejos 2025 covering upcoming Rust language features and thanking the Rust for Linux developers for helping drive them forward. Afterward, Benno Lossin and Xiangfei Ding went into more detail about their work on the three most important language features for kernel development: field projections, in-place initialization, and arbitrary self types.


  • Security updates for Wednesday
    Security updates have been issued by Fedora (apptainer, civetweb, mod_http2, openssl, pandoc, and pandoc-cli), Oracle (kernel), Red Hat (gstreamer1-plugins-bad-free, iputils, kernel, open-vm-tools, and podman), SUSE (cairo, firefox, ghostscript, gimp, gstreamer-plugins-rs, libxslt, logback, openssl-1_0_0, openssl-1_1, python-xmltodict, and rubygem-puma), and Ubuntu (gst-plugins-base1.0, linux-aws-6.8, linux-aws-fips, linux-azure, linux-azure-nvidia, linux-gke, linux-nvidia-tegra-igx, and linux-raspi).


  • Python 3.14.0 released
    Version 3.14.0 of the Python language has been released. There are a lot of changes this time around, including official support for free threading, template string literals, and much more; see the announcement for details.
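
    As an illustrative aside (a minimal sketch, not taken from the announcement): one way to see the free-threading work from Python code is to ask the interpreter whether the GIL is active. CPython 3.13 and later provide sys._is_gil_enabled(); the hasattr guard below covers older versions.

        import sys

        # CPython 3.13+ exposes sys._is_gil_enabled(); on a free-threaded
        # ("no-GIL") build it returns False once the GIL is disabled, while
        # regular builds return True. Older interpreters lack the attribute.
        if hasattr(sys, "_is_gil_enabled"):
            print("GIL currently enabled:", sys._is_gil_enabled())
        else:
            print("This interpreter predates free-threading support.")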


  • [$] Progress on defeating lifetime-end pointer zapping
    Paul McKenney gave a remote presentation at Kangrejos 2025 following up on the talk he gave last year about the lifetime-end-pointer-zapping problem: certain common patterns for multithreaded code are technically undefined behavior, and changes to the C and C++ specifications will be needed to correct that. Those changes could also impact code that uses unsafe Rust, such as the kernel's Rust bindings. Progress on the problem has been slow, but McKenney believes that a solution is near at hand.


LXer Linux News


  • FEX 2510 Brings More Optimizations For x86_64 Binaries On AArch64
    FEX 2510 is out as the newest release of this open-source emulator for running x86/x86_64 applications on ARM64 (AArch64) Linux devices. Adding to FEX's popularity is its continued ability to run Wine/Proton for Windows games on ARM64 Linux...











Slashdot

  • Apple and Google Reluctantly Comply With Texas Age Verification Law
    An anonymous reader quotes a report from Ars Technica: Apple yesterday announced a plan to comply with a Texas age verification law and warned that changes required by the law will reduce privacy for app users. "Beginning January 1, 2026, a new state law in Texas -- SB2420 -- introduces age assurance requirements for app marketplaces and developers," Apple said yesterday in a post for developers. "While we share the goal of strengthening kids' online safety, we are concerned that SB2420 impacts the privacy of users by requiring the collection of sensitive, personally identifiable information to download any app, even if a user simply wants to check the weather or sports scores." The Texas App Store Accountability Act requires app stores to verify users' ages and imposes restrictions on those under 18. Apple said that developers will have "to adopt new capabilities and modify behavior within their apps to meet their obligations under the law." Apple's post noted that similar laws will take effect later in 2026 in Utah and Louisiana. Google also recently announced plans for complying with the three state laws and said the new requirements reduce user privacy. "While we have user privacy and trust concerns with these new verification laws, Google Play is designing APIs, systems, and tools to help you meet your obligations," Google told developers in an undated post. The Utah law is scheduled to take effect May 7, 2026, while the Louisiana law will take effect July 1, 2026. The Texas, Utah, and Louisiana "laws impose significant new requirements on many apps that may need to provide age appropriate experiences to users in these states," Google said. "These requirements include ingesting users' age ranges and parental approval status for significant changes from app stores and notifying app stores of significant changes."


    Read more of this story at Slashdot.


  • Intel's Open Source Future in Question as Exec Says He's Done Carrying the Competition
    An anonymous reader shares a report: Over the years, Intel has established itself as a paragon of the open source community, but that could soon change under the x86 giant's new leadership. Speaking to press and analysts at Intel's Tech Tour in Arizona last week, Kevork Kechichian, who now leads Intel's datacenter biz, believes it's time to rethink what Chipzilla contributes to the open source community. "We have probably the largest footprint on open source out there from an infrastructure standpoint," he said during his opening keynote. "We need to find a balance where we use that as an advantage to Intel and not let everyone else take it and run with it." In other words, the company needs to ensure that its competitors don't benefit more from Intel's open source contributions than it does. Speaking with El Reg during a press event in Arizona last week, Kechichian emphasized that the company has no intention of abandoning the open source community. "Our intention is never to leave open source," he said. "There are lots of people benefiting from the huge investment that Intel put in there." "We're just going to figure out how we can get more out of that [Intel's open source contributions] versus everyone else using our investments," he added.


    Read more of this story at Slashdot.


  • He Was Expected To Get Alzheimer's 25 Years Ago. Why Hasn't He?
    Doug Whitney carries a genetic mutation that guaranteed he would develop Alzheimer's disease in his late forties or early fifties. His mother and nine of her thirteen siblings died from the disease. His oldest brother died at 45. The mutation has decimated his family for generations. Whitney is now 76 and remains cognitively healthy. The New York Times has a fascinating long read on Whitney and things happening around him. Scientists at Washington University School of Medicine in St. Louis have studied Whitney for 14 years. They extract his cerebrospinal fluid and conduct brain scans during his periodic visits from Washington State. His brain contains heavy amyloid deposits but almost no tau tangles in regions associated with dementia. Tau accumulation correlates directly with cognitive decline. Whitney accumulated tau only in his left occipital lobe, an area that does not play a major role in Alzheimer's. Researchers identified several possibly protective factors in Whitney's biology. His immune system produces a lower inflammatory response than other mutation carriers. He has unusually high levels of heat shock proteins, which prevent proteins from misfolding. Scientists believe his decade working in Navy engine rooms at temperatures reaching 110 degrees may have driven this accumulation. He also carries three gene variants his afflicted relatives lack. His son Brian inherited the mutation and remains asymptomatic at 43. Brian received anti-amyloid drugs in clinical trials. Researchers published their findings on Whitney in Nature Medicine. They described the study as a call for other scientists to help solve the case.


    Read more of this story at Slashdot.


  • Windows Product Activation Creator Reveals Truth Behind XP's Most Notorious Product Key
    Dave W. Plummer, the Microsoft developer who created Task Manager and helped build Windows Product Activation, has revealed the origins of Windows XP's most notorious product key. The alphanumeric string FCKGW-RHQQ2-YXRKT-8TG6W-2B7Q8 was not cracked through clever hacking but leaked as a legitimate volume licensing key five weeks before XP's October 2001 release. A warez group distributed the key alongside special corporate installation media. Windows Product Activation generated hardware IDs from system components and sent them to Microsoft for validation. The leaked volume licensing key bypassed this entirely. The system recognized it as corporate licensing and skipped phone-home activation. Users could install XP without activation prompts or 30-day timers. Microsoft later blacklisted the key.


    Read more of this story at Slashdot.


  • Internet Archive Ordered To Block Books in Belgium After Talks With Publishers Fail
    The Internet Archive must block access to books in its Open Library project for Belgian users after negotiations with publishers failed. A Brussels Business Court issued a site-blocking order in July targeting several shadow libraries and the Internet Archive. A Belgian government department paused the order for the U.S. nonprofit and urged both parties to negotiate. The talks over recent weeks were unsuccessful. The Department for Combating Infringements of Copyright concluded last week that the Internet Archive hosts the contested books and has the ability to render them inaccessible. Publishers must supply a list of books to be blocked. The nonprofit then has 20 calendar days to implement the measures and prevent future digital lending of those works in Belgium. The order includes a one-time penalty of $578,000 for non-compliance and remains in place until July 16 next year. The Internet Archive operates Open Library by purchasing physical copies and digitizing them to lend out one at a time. Publishers previously won a U.S. federal court case against the project.


    Read more of this story at Slashdot.


  • Judge Dismisses Retail Group's Challenge To New York Surveillance Pricing Law
    A federal judge has dismissed a lawsuit by the National Retail Federation challenging a New York state law that requires retailers to tell customers when their personal data are used to set prices, known as surveillance pricing. From a report: U.S. District Judge Jed Rakoff in Manhattan said the world's largest retail trade group did not plausibly allege that New York's Algorithmic Pricing Disclosure Act violated its members' free speech rights under the Constitution's First Amendment. The first-in-the-nation law required retailers to disclose in capital letters when prices were set by algorithms using personal data, or face possible civil fines of $1,000 per violation. Governor Kathy Hochul said charging different prices depending on what people were willing to pay was "opaque," and prevented comparison-shopping.


    Read more of this story at Slashdot.


  • Intel's Next-Generation Panther Lake Laptop Chips Could Be a Return To Form
    Intel today announced its Panther Lake laptop processors, consolidating the confusing split between Lunar Lake and Arrow Lake chips that define its current generation. The new processors use a unified architecture across all models instead of mixing different technologies at different price points. Panther Lake comes in three configurations. An 8-core model targets mainstream ultrabooks. A 16-core version adds PCI Express lanes for gaming laptops and workstations with discrete GPUs. A third 16-core variant with 12 Xe3 graphics cores aims at high-end thin-and-light laptops without dedicated graphics cards. All three chips use the same Cougar Cove P-cores, Darkmont E-cores, and Xe3 GPU architecture. They share an NPU capable of 50 trillion operations per second and identical media encoding capabilities. The main differences are core counts and I/O options rather than fundamental architectural variations. The approach contrasts with Intel's current Core Ultra 200 series. Lunar Lake chips integrated RAM on-package and used the latest Battlemage GPU architecture but were mostly used in high-end thin laptops. Arrow Lake processors offered more flexibility but paired newer CPU cores with older graphics and an NPU that did not meet Microsoft Copilot+ requirements. Intel claims Panther Lake delivers up to 10% better single-threaded performance than Lunar Lake and up to 50% faster multi-threaded performance than both previous generations. The GPU is roughly 50% quicker. Power consumption drops 10% compared to Lunar Lake and 40% versus Arrow Lake. The chips use Intel's 18A manufacturing process for the compute tile. TSMC fabricates the platform controller tile. Intel said systems with Panther Lake processors should ship by the end of 2025.


    Read more of this story at Slashdot.


  • ISPs Created So Many Fees That FCC Will Kill Requirement To List Them All
    FCC Chairman Brendan Carr says Internet service providers shouldn't have to list every fee they charge. From a report: Responding to a request from cable and telecom lobby groups, he is proposing to eliminate a rule that requires ISPs to itemize various fees in broadband price labels that must be made available to consumers. The rule took effect in April 2024 after the FCC rejected ISPs' complaints that listing every fee they created would be too difficult. The rule applies specifically to recurring monthly fees "that providers impose at their discretion, i.e., charges not mandated by a government." ISPs could comply with the rule either by listing the fees or by dropping the fees altogether and, if they choose, raising their overall prices by a corresponding amount. But the latter option wouldn't fit with the strategy of enticing customers with a low advertised price and hitting them with the real price on their monthly bills. The broadband price label rules were created to stop ISPs from advertising misleadingly low prices. This week, Carr scheduled an October 28 vote on a Notice of Proposed Rulemaking (NPRM) that proposes eliminating several of the broadband-label requirements. One of the rules in line for removal requires ISPs to "itemize state and local passthrough fees that vary by location." The FCC would seek public comment on the plan before finalizing it.


    Read more of this story at Slashdot.


  • DC Comics Won't Support Generative AI: 'Not Now, Not Ever'
    An anonymous reader shares a report: DC Comics president and publisher Jim Lee said that the company "will not support AI-generated storytelling or artwork," assuring fans that its future will remain rooted in human creativity. "Not now, not ever, as long as [SVP, general manager] Anne DePies and I are in charge," Lee said during his panel at New York Comic Con on Wednesday, likening concerns around AI dominating future creative industries to the Millennium bug scare and NFT hype. "People have an instinctive reaction to what feels authentic. We recoil from what feels fake. That's why human creativity matters," said Lee. "AI doesn't dream. It doesn't feel. It doesn't make art. It aggregates it."


    Read more of this story at Slashdot.


  • McKinsey Wonders How To Sell AI Apps With No Measurable Benefits
    Software vendors keen to monetize AI should tread cautiously, since they risk inflating costs for their customers without delivering any promised benefits such as reducing employee head count. From a report: The latest report from McKinsey & Company mulls what software-as-a-service (SaaS) vendors need to do to navigate the minefield of hype that surrounds AI and successfully fold such capabilities into their offerings. The consultancy identifies three main challenges holding back broader growth in AI software monetization. One of these is simply the inability to show any savings that can be expected. Many software firms trumpet potential use cases for AI, but only 30 percent have published quantifiable return on investment from real customer deployments. Meanwhile, many customers see AI hiking IT costs without being able to offset these by slashing labor costs. The billions poured into developing AI models mean they don't come cheap, and AI-enabling the entire customer service stack of a typical business could lead to a 60 to 80 percent price increase, McKinsey says, while quoting an HR executive at a Fortune 100 company griping: "All of these copilots are supposed to make work more efficient with fewer people, but my business leaders are also saying they can't reduce head count yet." Another challenge is scaling up adoption after introduction, which the report blames on underinvestment in change management. It says that for every $1 spent on model development, firms should expect to have to spend $3 on change management, which means user training and performance monitoring. The third issue is a lack of predictable pricing, which means that customers find it hard to forecast how their AI costs will scale with usage because the pricing models are often complex and opaque.


    Read more of this story at Slashdot.


The Register


  • Amazon's Quick Suite is like agentic AI training wheels for enterprises
    Slow down there Andy; you wouldn't want to bump into any hallucinations
    Despite ongoing concerns over the accuracy, reliability, and trustworthiness of AI in the enterprise, Amazon believes that if it can just make building agents easier for the average worker, they'll be automating the boring parts of their job in no time.…


  • Google rearranges Agentspace into Gemini Enterprise
    A new spin on workflow automation as Chocolate Factory tries to displace Microsoft as the enterprise go-to
    Google on Thursday announced the launch of Gemini Enterprise, a platform for automating business workflows using the company's Gemini family of machine learning models.…


  • Crims had 3-month head start on defenders in Oracle EBS invasion
    The miscreants started their attack all the way back on July 10
    The raid on Oracle E-Business Suite (EBS) likely began as early as July - about three months before any public detections - with extortionists compromising "dozens" of organizations, a Google investigation has determined.…



  • Space Shuttle war of words takes off as senator blasts 'woke Smithsonian'
    Houston, we have a custody battle
    Exclusive The war of words over the possible relocation of Space Shuttle Discovery has ratcheted up, with the office of Senator John Cornyn (R-TX) telling The Register that the orbiter belongs in Houston "whether the woke Smithsonian and its cronies in Congress like it or not."…





  • Gartner warns agentic AI startups: Prepare to be consolidated
    Analyst predicts over-supply will trigger a market correction in favor of deep-pocketed incumbents
    Gartner has signaled that the supply of "agentic AI" in terms of models, platforms, and products far outstrips demand, creating a situation that will lead to consolidation and market correction.…


Polish Linux

  • Security: Why Linux Is Better Than Windows Or Mac OS
    Linux is a free and open-source operating system developed by Linus Torvalds and first released in 1991. Since its release it has built a widespread user base worldwide. Linux users swear by the reliability and freedom that this operating system offers, especially when compared to its counterparts, Windows and [0]


  • Essential Software That Are Not Available On Linux OS
    An operating system is essentially the most important component of a computer. It manages the different hardware and software components of a computer in the most effective way. There are different types of operating systems, and each comes with its own set of programs and software. You cannot expect a Linux program to have all [0]


  • Things You Never Knew About Your Operating System
    The advent of computers has brought about a revolution in our daily lives. From computers so huge they filled a room, we have come a very long way to desktops and even palmtops. These machines have become our virtual lockers, and life without these networked machines has become unimaginable. Sending mails, [0]


  • How To Fully Optimize Your Operating System
    Computers and systems are tricky and complicated. If you lack a thorough knowledge or even basic knowledge of computers, you will often find yourself in a bind. You must understand that something as complicated as a computer requires constant care and constant cleaning up of junk files. Unless you put in the time to configure [0]


  • The Top Problems With Major Operating Systems
    No system is entirely free of problems. Even if your system and its operating system are easy to understand, there will be times when certain problems arise. Most of these problems are easy to handle and easy to get rid of. But you must be [0]


  • 8 Benefits Of Linux OS
    Linux is a small and fast-growing operating system. However, we can't quite term it software yet. As discussed in the article about what a Linux OS can do, Linux is a kernel. Kernels are used by software and programs; they are used by the computer and can be used with various third-party software [0]


  • Things Linux OS Can Do That Other OS Cant
    What Is Linux OS? Linux, similar to Unix, is an operating system which can be used on various computers, handheld devices, embedded devices, etc. The reason why a Linux-based system is preferred by many is that it is easy to use and re-use. A Linux-based operating system is technically not an Operating System. Operating [0]


  • Packagekit Interview
    PackageKit aims to simplify the management of applications on Linux and GNU systems. The main objective is to remove the pain it takes to set up a system. In an interview, Richard Hughes, the developer of PackageKit, said that he aims to make Linux systems just as powerful as Windows or [0]


  • What’s New in Ubuntu?
    What Is Ubuntu? Ubuntu is open-source software for Linux-based computers. It is marketed by Canonical Ltd. and the Ubuntu community. Ubuntu was first released in late October 2004. The Ubuntu program uses the Java, Python, C, C++ and C# programming languages. What Is New? Version 17.04 is now available here [0]


  • Ext3 Reiserfs Xfs In Windows With Regards To Colinux
    The problem with Windows is that there are various limitations to the computer and there is only so much you can do with it. You can access Ext3, ReiserFS, and XFS filesystems by using the coLinux tool. Download the tool from the official site or from the SourceForge site. Edit the connection to “TAP Win32 Adapter [0]


OSnews

  • Microsoft closes another loophole to enable local accounts in Windows 11
    It seems like Microsoft is continuing its quest to force Windows users to use Microsoft accounts instead of local accounts, despite the fact Microsoft accounts on Windows are half-baked and potentially incredibly dangerous. In the most recent Windows 11 Insider Preview Build (26220.6772), the company has closed a few more loopholes people were using to trick the Windows installer into allowing local user accounts. We are removing known mechanisms for creating a local account in the Windows Setup experience (OOBE). While these mechanisms were often used to bypass Microsoft account setup, they also inadvertently skip critical setup screens, potentially causing users to exit OOBE with a device that is not fully configured for use. Users will need to complete OOBE with internet and a Microsoft account, to ensure device is setup correctly. ↫ Amanda Langowski at the Windows Blogs It seems that the specific workaround removed with this change is executing the command "start ms-cxh:localonly" in the command prompt during the installation process (you can access cmd.exe by pressing shift+F10 during installation). Several other workarounds have also been removed in recent years, making it ever harder for people forced to use Windows 11 to use a local account, like the gods intended. The only reason Microsoft is pushing online accounts this hard is that it makes it much, much easier for them to collect your data and wrestle control over your installation away from you. A regular, proper local account with additional online accounts for various services would work just as well for users, allowing them to mix and match exactly what kind of cloud services they want integrated into their operating system. However, leaving this choice to the user invariably means people aren't going to be using whatever trash services Microsoft offers. And so, Microsoft will make that choice for you, whether you like it or not. There are a million reasons to stay away from the Windows version that must be making Dave Cutler cry, and the insistence on online accounts is but one of them. It's a perfect example of how Microsoft develops Windows not to make it better for its users, but to make it better for its bottom line. I wonder how much more Microsoft can squeeze its users before we see some sort of actual revolt. Windows used to just lack taste. These days, it's also actively hostile.


  • Servo GTK: a widget to embed Servo in GTK4
    Servo, the Rust-based browsing engine spun off from Mozilla, keeps making progress every month, and this made Ignacio Casal Quinteiro wonder: what if we make a GTK widget so we can test Servo and compare it to WebKitGTK? As part of my job at Amazon I started working in a GTK widget which will allow embedding a Servo Webview inside a GTK application. This was mostly a research project just to understand the current state of Servo and whether it was at a good enough state to migrate from WebkitGTK to it. I have to admit that it is always a pleasure to work with Rust and the great gtk-rs bindings. Instead, Servo while it is not yet ready for production, or at least not for what we need in our product, it was simple to embed and to get something running in just a few days. The community is also amazing, I had some problems along the way and they were providing good suggestions to get me unblocked in no time. ↫ Ignacio Casal Quinteiro The code is now out there, and while not yet ready for widespread use, this will make it easier for GTK developers to periodically assess the state of Servo, hopefully some day concluding it can serve as a replacement for WebKitGTK.


  • Synology reverses policy banning third-party HDDs after NAS sales plummet
    Earlier this year, popular NAS vendor Synology announced it would start requiring some of its more expensive models to only use Synology-branded drives. It seems the uproar this announcement caused has had some real chilling effect on sales, and the company just cancelled its plans. Synology has backtracked on one of its most unpopular decisions in years. After seeing NAS sales plummet in 2025, the company has decided to lift restrictions that forced users to buy its own Synology hard drives. The policy, introduced earlier this year, made third-party HDDs from brands like Seagate and WD practically unusable in newer models such as the DS925+, DS1825+, and DS425+. That change didn’t go over well. Users immediately criticised Synology for trying to lock them into buying its much more expensive drives. Many simply refused to upgrade, and reviewers called out the move as greedy and shortsighted. According to some reports, sales of Synology’s 2025 NAS models dropped sharply in the months after the restriction was introduced. ↫ Hilbert Hagedoorn at Guru3D.com If you want to screw over your users to make a few more euros, it's generally a good idea to first assess just how locked-in your users really are. Synology is but one of many companies making and selling NAS devices, and even building one yourself is stupidly easy these days. There's an entire cottage industry of motherboards and enclosures specifically designed for this purpose, and there are countless easy-to-use software options out there, too. In other words, nobody is really locked into Synology, so any unpopular move by the company was bound to make people look elsewhere, only to discover there are tons of competing options to choose from. The market seems to have spoken, and Synology can only respond by reversing its decision. Honestly, I had almost forgotten what a healthy tech market with tons of competing options looks like.


  • MicroPythonOS: an Android-like operating system for microcontrollers like the ESP32
    MicroPythonOS is a lightweight, fast, and versatile operating system designed to run on microcontrollers like the ESP32 and desktop systems. With a modern Android-like touch screen UI, App Store, and Over-The-Air updates, it’s the perfect OS for innovators and developers. ↫ MicroPythonOS website It's quite neat to see this running in such a constrained environment, especially considering it comes with a graphical user interface, some basic applications, and niceties like OTA updates and an application repository. As the name implies, MicroPythonOS uses native MicroPython for application and driver development, making cross-platform portability from microcontrollers to regular PCs a possibility. It's built on the MicroPython runtime, with LVGL for graphics, packaged by the lvgl_micropython project. It's still relatively early in development, but it's completely open source so anyone can help out and improve the project. I'm personally not too well-versed in the world of microcontrollers like the popular ESP32, so I'm not entirely sure just how capable other operating systems and platforms built on top of it are. This particular operating system seems to make it rather easy and straightforward for anyone to build and distribute an application for such microcontrollers, to a point where even an idiot like myself could relatively easily buy, say, an ESP32 kit with a display and assemble my own collection of small applications. To repeat myself, it simply looks neat.


  • Qualcomm gobbles up Arduino
    It was good while it lasted, I guess. Arduino will retain its independent brand, tools, and mission, while continuing to support a wide range of microcontrollers and microprocessors from multiple semiconductor providers as it enters this next chapter within the Qualcomm family. Following this acquisition, the 33M+ active users in the Arduino community will gain access to Qualcomm Technologies’ powerful technology stack and global reach. Entrepreneurs, businesses, tech professionals, students, educators, and hobbyists will be empowered to rapidly prototype and test new solutions, with a clear path to commercialization supported by Qualcomm Technologies’ advanced technologies and extensive partner ecosystem. ↫ Qualcomm's press release Qualcomm's track record when it comes to community engagement, open source, and long-term support is absolutely atrocious, and there's no way Arduino will be able to withstand the pressures from management. We've seen this exact story play out a million times, and it always begins with lofty promises, and always ends with all of them being broken. I have absolutely zero faith Arduino will be able to continue to do its thing like it has. Arduino devices are incredibly popular, and it makes sense for Qualcomm to acquire them. If I were using Arduinos for my open source projects, I'd be a bit on edge right now.


  • That small sliver of time where a QNX desktop was a real thing we did
    Bradford Morgan White has published an excellent retrospective of QNX, the realtime microkernel operating system focused on embedded use cases. The final paragraph made me sad, though. QNX is a fascinating operating system. It was extremely well designed from the start, and while it has been rewritten, the core ideas that allowed it survive for 45 years persist to this day. While I am sad that Photon was deprecated, the reasoning is sound. Most vendors using QNX either do not require a GUI, or they implement their own. For example, while Boston Dynamics uses QNX in their robots, they don’t really need Photon, and neither do SpaceX’s Falcon rockets. While cars certainly have displays, most vehicle makers desire their screen interfaces to have a unique look and feel. Of course, just stating these use cases of robots, rockets, and cars speaks to the incredible reliability and versatility of QNX. Better operating systems are possible, and QNX proves it. ↫ Bradford Morgan White at Abort Retry Fail Way back in 2004, before I even joined OSNews properly, I wrote about QNX as a desktop operating system, because back then I went through a short stint where I used QNX and its amazing Photon MicroGUI as my primary desktop. Back then, there was a short-lived but very enthusiastic community using QNX on desktops, sharing tips and findings, supported by one or two QNX employees who tried their best to support this fledgling community in the face of corporate indifference. Eventually, these QNX employees left the company, and QNX started making it clearer than ever that they were not, in any way, interested in people using QNX on desktops, and in all honesty, they were most likely correct. However, I still think we had something special there, and had QNX management decided to help us out, it could've grown into something more sustainable. An open source QNX and Photon could've had an impact. Using QNX on the desktop back then was much easier than you might imagine, with graphical package managers, capable browsers and email clients, a massive pile of open source packages, pretty great performance, and little to no need to ever leave the GUI and use a CLI. If your hardware was properly supported, you could have a great experience. One of the very small what-ifs from the early 2000s.


  • Redox now multithreaded by default
    Can these months please stop passing us by this quickly? It seems we're getting a monthly Redox update every other week now, and that's not right. Anyway, what have the people behind this Rust-based operating system been up to this past month? One of the biggest changes this month is that Redox is now multithreaded by default, at least on x86 machines. Unsurprisingly, this can enable some serious performance gains. Also contributing to performance improvements this month is inode data inlining for small files, and the installation is now a lot faster too. LZ4 compression has been added to Redox, saving storage space and improving performance. As far as ports go, there's a ton of new and improved ports, like OpenSSH, Nginx, PHP, Neovim, OpenSSL 3.x, and more. On top of that, there's a long list of low-level kernel improvements, driver changes, and relibc improvements, changes to the main website, and so on.


  • The case against generative AI: the numbers just dont add up (i.e., its a scam)
    Every single “vibe coding is the future,” “the power of AI,” and “AI job loss” story written perpetuates a myth that will only lead to more regular people getting hurt when the bubble bursts. Every article written about OpenAI or NVIDIA or Oracle that doesn’t explicitly state that the money doesn’t exist, that the revenues are impossible, that one of the companies involved burns billions of dollars and has no path to profitability, is an act of irresponsible make believe and mythos. ↫ Edward Zitron The numbers are clear. People aren't paying for AI, and those that do are using up way more resources than they're actually paying for. The profits required to make all of this work just aren't realistic in any way, shape, or form. The money being pumped around doesn't even exist. It's a scam of such utterly massive proportions, it's easier for many of us to just assume it can't possibly be one. Too big to fail? Too many promises to be a scam. It's going to be a bloodbath, but as usual when the finance and tech bros scam entire sectors, it's us normal folk who will be left to foot the bill. Let's blame immigrants some more while we implement harsh austerity measures to bail out the billionaire class. Again.


  • Under pressure from US government, Apple removes ICEBlock application from the App Store
    Your lovely host, late last night: Google claims they won’t be sharing developer information with governments, but we all know that’s a load of bullshit, made all the more relevant after whatever the fuck this was. If you want to oppose the genocide in Gaza or warn people of ICE raids, and want to create an Android application to coordinate such efforts, you probably should not, and stick to more anonymous organising tools. ↫ Thom Holwerda Let's check in with how that other walled garden Google is trying to emulate is doing. Apple has removed ICEBlock, an app that allowed users to monitor and report the location of immigration enforcement officers, from the App Store. "We created the App Store to be a safe and trusted place to discover apps," Apple said in a statement to Business Insider. "Based on information we've received from law enforcement about the safety risks associated with ICEBlock, we have removed it and similar apps from the App Store." ↫ Katherine Tangalakis-Lippert, Peter Kafka, and Kwan Wei Kevin Tan for Business Insider Oh. Apple and Google are but mere extensions of the state apparatus. Think twice about what device you bring with you the next time you wish to protest your government's actions.


  • Google details Android developer certification requirement, and its as bad as we feared
    Google has been on a bit of a marketing blitz to try and counteract some of the negative feedback following its new developer verification requirement for Android applications, and while they're using a lot of words, none of them seem to address the core concerns. It basically comes down to the fact that they just don't care about the consequences this new requirement has for projects like F-Droid, nor are they really bothered by any of the legitimate privacy concerns this whole thing raises. If this new requirement is implemented as proposed, F-Droid will simply not be able to continue to exist in its current form. F-Droid builds the applications in its repository itself and signs them, and developer verification does not fit into that picture at all. F-Droid works this way to ensure its applications are built from the publicly available sources, so developers can't sneak anything nefarious into any binaries they would otherwise be submitting themselves. The privacy angle doesn't seem to bother Google much, either, which shouldn't be a surprise to anyone. With this new requirement, Android application developers can simply no longer be anonymous, which has a variety of side-effects, not least of which is that anyone developing applications for, say, dissidents, can now no longer be anonymous. Google claims they won't be sharing developer information with governments, but we all know that's a load of bullshit, made all the more relevant after whatever the fuck this was. If you want to oppose the genocide in Gaza or warn people of ICE raids, and want to create an Android application to coordinate such efforts, you probably should not, and stick to more anonymous organising tools. Students and hobbyists are getting the short end of the stick, too, as Google's promised program specifically for these two groups is incredibly limited. Yes, it waives the $25 fee, but that's about the only positive here: Developers who register with Google as a student or hobbyist will face severe app distribution restrictions, namely a limit on the number of devices that can install their apps. To enforce this, any user wanting to install software from these developers must first retrieve a unique identifier from their device. The developer then has to input this identifier into the Android Developer Console to authorize that specific device for installation. ↫ Mishaal Rahman at Android Authority Google does waive the requirement for developer certification for one particular type of user, and in doing so, highlights the only group of users Google truly cares about: enterprise users. Any application installed by an enterprise on managed devices will not need to have its developer certified. Google states that in this particular use case, the enterprise's IT department is responsible for any security issues that may arise. Isn't it funny how the only group of users who won't have to deal with this nonsense are companies who pay Google tons of money for their enterprise tools? The only way we're going to get out of this is if governments step up and put a stop to this. We can safely assume the United States government won't be on our side (they're too busy with their recurring idiotic song-and-dance anyway), so our only hope is the European Commission stepping in, but I'm not holding my breath. After all, Apple's rules and regulations regarding installing applications outside of the App Store in the EU are not that different from what Google is going to do. While the EU is not happy with the details of Apple's rules, it seems to be broadly okay with them. I'm afraid governments won't be stepping in to stop this one.


Linux Journal - The Original Magazine of the Linux Community

  • Bringing Desktop Linux GUIs to Android: The Next Step in Graphical App Support
    by George Whittaker
    Introduction
    Android has long been focused on running mobile apps, but in recent years, features aimed at developers and power users have begun pushing its boundaries. One exciting frontier: running full Linux graphical (GUI) applications on Android devices. What was once a novelty is now gradually becoming more viable, and recent developments point toward much smoother, GPU-accelerated Linux GUI experiences on Android.

    In this article, we’ll trace how Linux apps have run on Android so far, explain the new architecture changes enabling GPU rendering, showcase early demonstrations, discuss remaining hurdles, and look at where this capability is headed.
    The State of Linux on Android Today
    The Linux Terminal App
    Google’s Linux Terminal app is the core interface for running Linux environments on Android. It spins up a virtual machine (VM), often booting Debian or similar, and lets users enter a shell, install packages, run command-line tools, etc.

    Initially, the app was limited purely to text / terminal-based Linux programs; graphical apps were not supported meaningfully. More recently, Google introduced support for launching GUI Linux applications in experimental channels.
    Limitations: Rendering & Performance
    Even now, most GUI Linux apps on Android are rendered in software, that is, all drawing happens on the CPU (via a software renderer) rather than using the device’s GPU. This leads to sluggish UI, high CPU usage, more thermal stress, and shorter battery life.

    Because of these limitations, running heavy GUI apps (graphics editors, games, desktop-level toolkits) has been more experimental than practical.
    What’s Changing: GPU-Accelerated Rendering
    The big leap forward is moving from CPU rendering to GPU-accelerated rendering, letting the device’s graphics hardware do the heavy lifting.
    Lavapipe (Current Baseline)
    At present, the Linux VM uses Lavapipe (a Mesa software rasterizer) to interpret GPU API calls on the CPU. This works, but is inefficient, especially for complex GUIs or animations.
    Introducing gfxstream
    Google is planning to integrate gfxstream into the Linux Terminal app. gfxstream is a GPU virtualization / forwarding technology: rather than reinterpreting graphics calls in software, it forwards them from the guest (Linux VM) to the host’s GPU directly. This avoids CPU overhead and enables near-native rendering speeds.
    Go to Full Article


  • Fedora 43 Beta Released: A Preview of What's Ahead
    by George Whittaker
    Introduction
    Fedora’s beta releases offer one of the earliest glimpses into the next major version of the distribution — letting users and developers poke, test, and report issues before the final version ships. With Fedora 43 Beta, released on September 16, 2025, the community begins the final stretch toward the stable Fedora 43.

    This beta is largely feature-complete: developers hope it will closely match what the final release looks like (barring last-minute fixes). The goal is to surface regression bugs, UX issues, and compatibility problems before Fedora 43 is broadly adopted.
    Release & Availability
    The Fedora Project published the beta across multiple editions and media — Workstation, KDE Plasma, Server, IoT, Cloud, and spins/labs where applicable. ISO images are available for download from the official Fedora servers.

    Users already running Fedora 42 can upgrade via the DNF system-upgrade mechanism. Some spins (e.g. Mate or i3) are not fully available across all architectures yet.

    Because it’s a beta, users should be ready to encounter bugs. Fedora encourages testers to file issues via the QA mailing list or Fedora’s issue tracking infrastructure.
    Major New Features & Changes
    Fedora 43 Beta brings many updates under the hood — some in visible user features, others in core tooling and system behavior.
    Kernel, Desktop & Session Updates
    Fedora 43 Beta is built on Linux kernel 6.17.

    The Workstation edition features GNOME 49.

    In a bold shift, Fedora removes GNOME X11 packages for the Workstation, making Wayland-only the default and only session for GNOME. Existing users are migrated to Wayland.

    On KDE, Fedora 43 Beta ships with KDE Plasma 6.4 in the Plasma edition.
    Installer & Package Management
    Fedora’s Anaconda installer gets a WebUI by default for all Spins, providing a more unified and modern install experience across desktop variants.

    The installer now uses DNF5 internally, phasing out DNF4 which is now in maintenance mode.

    Auto-updates are enabled by default in Fedora Kinoite, ensuring that systems apply updates seamlessly in the background with minimal user intervention.
    Programming & Core Tooling Updates
    The Python version in Fedora 43 Beta moves to 3.14, an early adoption to catch bugs before the upstream release.
    Go to Full Article


  • Linux Foundation Welcomes Newton: The Next Open Physics Engine for Robotics
    by George Whittaker
    Introduction
    Simulating physics is central to robotics: before a robot ever moves in the real world, much of its learning, testing, and control happens in a virtual environment. But traditional simulators often struggle to match real-world physical complexity, especially where contact, friction, deformable materials, and unpredictable surfaces are involved. That discrepancy is known as the sim-to-real gap, and it’s one of the biggest hurdles in robotics and embodied AI.

    On September 29th, the Linux Foundation announced that it is contributing Newton, a next-generation, GPU-accelerated physics engine, as a fully open, community-governed project. This move aims to accelerate robotics research, reduce barriers to entry, and ensure long-term sustainability under neutral governance.

    In this article, we’ll unpack what Newton is, how its architecture stands out, the role the Linux Foundation will play, early use cases and challenges, and what this could mean for the future of robotics and simulation.
    What Is Newton?
    Newton is a physics simulation engine designed specifically for roboticists and simulation researchers who want high fidelity, performance, and extensibility. It was conceived through collaboration among Disney Research, Google DeepMind, and NVIDIA. The recent contribution to the Linux Foundation transforms Newton into an open governance project, inviting broader community collaboration.
    Design Goals & Key Features
    GPU-accelerated simulation: Newton leverages NVIDIA Warp as its compute backbone, enabling physics computations on GPUs for much higher throughput than traditional CPU-based simulators.

    Differentiable physics: Newton allows gradients to be propagated through simulation steps, making it possible to integrate physics into learning pipelines (e.g. backpropagation through control parameters).

    Extensible and multi-solver architecture: Users or researchers can plug in custom solvers, mix models (rigid bodies, soft bodies, cloth), and tailor functionality for domain-specific needs.

    Interoperability via OpenUSD: Newton builds on OpenUSD (Universal Scene Description) to allow flexible data modeling of robots and environments, and easier integration with asset pipelines.

    Compatibility with MuJoCo-Warp: As part of the Newton project, the MuJoCo backbone is adapted (MuJoCo-Warp) for high-performance simulation within Newton’s framework.
    Go to Full Article


  • Kernel 6.15.4 Performance Tuned, Networking Polished, Stability Reinforced
    by George Whittaker
    Introduction
    In the life cycle of any kernel branch, patch releases, those minor “.x” updates, play a vital role in refining performance, patching regressions, and ironing out rough edges. Kernel 6.15.4 is one such release: it doesn’t bring headline features, but focuses squarely on stabilizing and optimizing the 6.15 series with targeted fixes in performance and networking.

    While version 6.15 already introduced several ambitious changes (filesystem improvements, networking enhancements, Rust driver infrastructure, etc.), the 6.15.4 update doubles down on making those changes more robust and efficient. In this article, we'll walk through the most significant improvements, what they mean for systems running 6.15.*, and how to approach updating.
    Release Highlights
    The official announcement of Kernel 6.15.4 surfaced around late June 2025. The release includes:

    A full source tarball (linux-6.15.4.tar.xz) and patches.

    Signature verification via PGP for integrity.

    A changelog/diff summary comparing 6.15.3 → 6.15.4.

    This update is not a major feature expansion; it’s a refinement release targeting performance regressions, network subsystem reliability, and bug fixes that emerged in prior 6.15.* builds.
    Performance Enhancements
    Because 6.15 already brought several ambitious changes to memory, I/O, scheduler, and mount semantics, many of the improvements in 6.15.4 are about smoothing interactions, avoiding regressions, and reclaiming performance in corner cases. While not all patches are publicly detailed in summaries, we can infer patterns based on what 6.15 introduced and what “performance patches” generally target.
    Memory & TLB Optimizations
    One often-painful cost in high-performance workloads is flushing translation lookaside buffers (TLBs) too aggressively. Kernel 6.15 had already begun to optimize broadcast TLB invalidation using AMD’s INVLPGB (for remote CPUs) to reduce overhead in multi-CPU environments. In 6.15.4, fixes likely target edge cases or regressions in those mechanisms, ensuring TLB invalidation is more efficient and consistent.

    Additionally, various memory management cleanups, object reuse, and page handling improvements tend to appear in patch releases. While not explicitly documented in the public summaries, such fixes help reduce fragmentation, locking contention, and latency in memory allocation.
    Go to Full Article


  • Python 3.13.5 Patch Release Packed with Fixes & Stability Boosts
    by George Whittaker
    Introduction
    On June 11, 2025, the Python core team released Python 3.13.5, the fifth maintenance update to the 3.13 line. This release is not about flashy new language features, instead, it addresses some pressing regressions and bugs introduced in 3.13.4. The “.5” in the version number signals that this is a corrective, expedited update rather than a feature-driven milestone.

    In this article, we’ll explore what motivated 3.13.5, catalog the key fixes, review changes inherited in the 3.13 stream, and discuss whether and how you should upgrade. We’ll also peek at implications for future Python releases.
    What Led to 3.13.5 (Release Context)
    Python 3.13 — released on October 7, 2024 — introduced several significant enhancements over 3.12, including a revamped interactive shell, experimental support for running without a Global Interpreter Lock (GIL), and preliminary JIT infrastructure.

    However, after releasing 3.13.4, the maintainers discovered several serious regressions. Thus, 3.13.5 was accelerated (rather than waiting for the next regular maintenance release) to correct these before they impacted a broader user base. In discussions preceding the release, it was noted the Windows extension module build broke under certain configurations, prompting urgent action.

    Because of this, 3.13.5 is a “repair” release — its focus is bug fixes and stability, not new capabilities. Nonetheless, it also inherits and stabilizes many of the improvements introduced earlier in 3.13.
    Key Fixes & Corrections
    While numerous smaller bugs are resolved in 3.13.5, three corrections stand out as primary drivers for the expedited update:
    GH-135151 — Windows extension build failure
    Under certain build configurations on Windows (for the non-free-threaded build), compiling extension modules failed. This was traced to the pyconfig.h header inadvertently enabling free-threaded builds. The patch restores proper alignment of configuration macros, ensuring extension builds succeed as before.
    GH-135171 — Generator expression TypeError delay
    In 3.13.4, generator expressions stopped raising a TypeError early when given a non-iterable. Instead, the error was deferred to the time of first iteration. 3.13.5 restores the earlier behavior of raising the TypeError at creation time when the supplied input is not iterable. This change avoids subtler runtime surprises for developers.
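
    A small illustration of the behavior this fix restores (run it on your own builds to compare):

        # Passing a non-iterable to a generator expression should fail immediately,
        # not at first iteration. Under the 3.13.4 regression the error was deferred.
        try:
            gen = (x for x in 42)          # 42 is not iterable
        except TypeError as exc:
            # 3.13.5 (like releases before 3.13.4) raises here, at creation time.
            print("Raised at creation time:", exc)
        else:
            # With the regression, the TypeError only surfaced once you iterated.
            print("Deferred until iteration")
            next(gen)                      # TypeError raised here instead
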
    Go to Full Article


  • Denmark’s Strategic Leap Replacing Microsoft Office 365 with LibreOffice for Digital Independence
    by George Whittaker
    In the summer of 2025, Denmark’s government put forward a major policy change in its digital infrastructure: moving away from Microsoft Office 365 and, in part, basing its operations on the open-source LibreOffice suite. Below is an account of what this entails, why it matters, how it’s being done, and what the risks and opportunities are.
    What’s Changing and What’s Not
    The Danish Ministry of Digital Affairs has committed to replacing Microsoft Office 365 with LibreOffice.

    Earlier reports said that Windows would also be entirely swapped out for Linux, but those reports have since been corrected: Windows will remain in use on many devices for now.

    For LibreOffice, the adoption is being phased: about half of the ministry’s employees will begin using LibreOffice (and possibly Linux in some instances) in the summer months; the rest are expected to transition by autumn.
    Why Denmark Is Making This Move
    Digital Sovereignty & Dependence
    A primary driver is the concern over reliance on large foreign tech companies, especially suppliers based outside Europe. By reducing dependency on proprietary software controlled by corporations abroad, Denmark aims to gain more control over its data, security, and updates.
    Cost and Licensing
    Proprietary software comes with licensing fees, recurring costs, and often binding contracts. Adopting open-source alternatives like LibreOffice can potentially reduce those long-term expenditures.
    Security, Transparency, Flexibility
    Open-source software tends to allow more auditability, quicker patching, and the ability to adapt tools or software behavior to specific local or regulatory requirements.
    Implementation Plan & Timeline

    Phase 1 (Summer 2025, mid-year): Begin by moving about 50% of Ministry of Digital Affairs employees to LibreOffice (and, in selected cases, to Linux tools).

    Phase 2 (Autumn 2025): Full transition of the ministry’s office productivity tasks away from Microsoft Office 365 to LibreOffice.

    “Full” here is understood in the scope of office productivity tools (word processing, spreadsheets, slides, etc.), not necessarily replacing all legacy systems or moving everything off Windows.
    Challenges & Concerns
    While the vision is ambitious, there are several hurdles:
    Go to Full Article


  • Valve Survey Reveals Slight Retreat in Steam-on-Linux Share
    by George Whittaker
    Introduction
    Steam’s monthly Hardware & Software Survey, published by Valve, offers a window into what operating systems, hardware, and software choices its user base is making. It has become a key barometer for understanding trends in PC gaming, especially for less dominant platforms like Linux. The newest data shows that Linux usage among Steam users has edged downward subtly. While the drop is small, it raises interesting questions about momentum, hardware preferences, and what might lie ahead for Linux gaming.

    This article dives into the latest numbers, explores what may be behind the dip, and considers what it means for Linux users, developers, and Valve itself.
    Recent Figures: What the Data Shows
    June 2025 Survey Outcome: In June, Linux’s slice of Steam’s user base stood at 2.57%, down from approximately 2.69% in May — a decrease of 0.12 percentage points.

    Year-Over-Year Comparison: Looking back to June 2024, the Linux share was around 2.08%, so even with this recent slip, there’s still an upward trend compared to a year ago.

    Distribution Among Linux Users: A significant portion of Linux gamers are using Valve’s own SteamOS Holo (which accounts for sizable usage numbers via the Steam Deck and similar devices). In June, roughly one-third of the Linux user group was on SteamOS Holo.

    Hardware Insights:

    Among Linux users, AMD CPUs dominate: about 69% of Linux gamers were using AMD processors in June.

    Contrast that with the Windows-only survey, where Intel still has about 60% CPU share to AMD’s 39%.
    Interpreting the Slip: What Might Be Behind the Dip
    Though the drop is modest, a number of factors likely combine to produce it. Here are possible causes:

    Statistical Noise & Normal Fluctuation

    Monthly survey results tend to vary a bit, especially for smaller share percentages. A drop of 0.12 percentage points could simply be part of the normal ebb and flow.

    Sampling and Survey Methodology

    Survey participation may shift by region, language, hardware type, or time of year. If fewer Linux users participated in a given month, the percentage would drop even if absolute numbers stayed flat.

    Shifts in the language mix of Steam’s user base have shown up in past surveys; changes in which users are sampled, or how they respond, can affect the results.

    Latency or delays in uploading or processing survey data might also contribute to anomalies.

    External Hardware & Platform Trends
    Go to Full Article


  • Qt Creator 17 Ushers in a Fresh Look and Stronger CMake Integration
    by George Whittaker
    In June 2025, the Qt team officially rolled out Qt Creator 17, marking a notable milestone for developers who rely on this IDE for cross-platform Qt, C++, QML, and Python work. While there are many changes under the hood, two of the spotlighted improvements are its updated default visual style and significant enhancements in how CMake is supported. Below, we’ll explore these in depth, assess their impact, and offer guidance on how to adopt the new features smoothly.
    What's New in Qt Creator 17: A Snapshot
    Before zooming into the theme and CMake changes, here are some of the broader enhancements in version 17 to set context:

    The “2024” theme set (light and dark variants) — which first appeared in earlier versions — becomes the foundational appearance for all new installs.

    General polish across the UI: icon refreshes, more consistent spacing, and better contrast.

    Projects now bind run configurations more tightly to the build configurations. That means selecting a build (e.g. Debug vs Release) also constrains which run configurations apply.

    Upgraded C++ tooling (with LLVM 20.1.3), improved QML formatting options, enhanced Python (pyproject.toml) support, and refinements in version control & analysis tools.

    With that backdrop, let’s dive into the theme and CMake changes in more detail.
    A Refreshed Visual Identity: Default “2024” Themes
    What Has Changed
    Qt Creator 17 makes the “2024” light and dark themes the standard look & feel for new installations. These themes had been available previously (since Qt Creator 15) but in this version become the out-of-the-box configuration.

    Other visual adjustments accompany the theme change:

    Icons throughout the IDE have been reviewed and updated so they align better with the new theme style.

    UI consistency is improved: spacing, contrast, and alignment between interface elements have been refined so that the environment feels more cohesive.
    Why These Changes Matter
    A theme isn't just aesthetics. The look and feel of an IDE affect user comfort, readability, efficiency, and even fatigue. Some benefits include:

    Improved clarity for long coding sessions: better contrast helps in low-ambient light or for users with visual sensitivity.

    Consistency across elements: less jarring visual transitions when switching between parts of the interface or when using external themes/plugins.

    Reduced setup friction: since the “2024” theme is now default, many users won’t need to hunt down or tweak theme settings just to get a modern, usable look.
    Go to Full Article


  • Windows 11 Powers Up WSL: How GPU Acceleration & Kernel Upgrades Change the Game
    by George Whittaker
    Introduction
    Windows Subsystem for Linux (WSL) has gradually become one of Microsoft’s key bridges for developers, data scientists, and power users who need Linux compatibility without leaving the Windows environment. Over recent versions, WSL2 brought major improvements: a real Linux kernel running in a lightweight virtualized environment, much better filesystem behavior, nearly full system-call compatibility, and so on. However, until recently, certain high-performance workloads (GPU computing, video encoding/decoding) and very up-to-date kernel features were either limited, inefficient, or unavailable.

    In Windows 11, Microsoft has taken bold strides to remove many of these bottlenecks. Two of the most significant enhancements are:

    The ability for WSL to tap into the GPU for acceleration (compute, video hardware offload, etc.), reducing reliance on the CPU for work the GPU is far better suited to.

    More seamless Linux kernel upgrades, allowing users to run newer kernel versions inside WSL2, bringing performance, driver, and feature improvements faster.

    This article walks through each of these in detail: what has changed, why it matters, how to use it, what limitations still exist, and how these developments shift what’s possible with WSL on Windows 11.
    What WSL Was, and Where It Needed Improvement
    Before diving into recent changes, it helps to understand what WSL (especially WSL2) already provided, and where it lagged.

    WSL1: Early versions translated Linux system calls to Windows equivalents. Good for basic command-line tools and scripts, but limited in compatibility with certain networking, kernel module, filesystem, and performance-sensitive tasks.

    WSL2: Introduced a real Linux kernel inside a lightweight VM (Hyper-V or a similar backend), better system-call compatibility, better performance especially for Linux tools, and much improved behavior for things like Docker, compiling, etc. Still, heavy workloads (e.g. ML training, video encoding, hardware-accelerated graphics) were held back by CPU-only execution, the lack of GPU feature passthrough, older kernels, and so on.

    So developers were pushing Microsoft to allow more direct access to GPU functionality (CUDA, DirectML, video decoding), and to speed up how kernel updates reach users.
    GPU Acceleration in WSL on Windows 11: What It Means
    GPU acceleration here refers to WSL’s ability to offload certain computation or video tasks from the CPU to the GPU, enabling faster, more efficient execution. This includes:

    Compute workloads - frameworks like CUDA (for NVIDIA), DirectML, etc., so that things like deep learning, scientific computing, and data-parallel tasks run much faster. Microsoft now supports running NVIDIA CUDA inside WSL to accelerate ML libraries like PyTorch and TensorFlow.
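
    A quick, hedged way to confirm that GPU compute offload actually works from inside a WSL2 distribution is to ask PyTorch. The sketch below assumes a CUDA-enabled PyTorch wheel is installed in the WSL environment and that the Windows-side NVIDIA driver exposes the GPU to WSL; it is a sanity check, not Microsoft’s documented procedure.

        # Minimal sketch: confirm GPU acceleration is visible inside WSL2.
        # Assumes a CUDA-enabled PyTorch build and a Windows-side NVIDIA driver
        # that exposes the GPU to WSL (an assumption, not a guarantee).
        import platform
        import torch

        print("Kernel inside WSL:", platform.release())   # typically ends in -microsoft-standard-WSL2
        print("CUDA available:", torch.cuda.is_available())

        if torch.cuda.is_available():
            print("GPU:", torch.cuda.get_device_name(0))
            # A trivial matrix multiply on the GPU to confirm compute offload works.
            a = torch.randn(1024, 1024, device="cuda")
            b = torch.randn(1024, 1024, device="cuda")
            print("Result checksum:", (a @ b).sum().item())
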
    Go to Full Article


  • Harnessing GitOps on Linux for Seamless, Git-First Infrastructure Management
    by George Whittaker
    Introduction
    Imagine a world where every server, application, and network configuration is meticulously orchestrated via Git, where updates, audits, and recoveries happen with a single commit. This is the realm GitOps unlocks, and it is especially potent when paired with the versatility of Linux environments. In this article, we'll dive deep into how Git-driven workflows can transform the way you manage Linux infrastructure, offering clarity, control, and confidence in every change.
    GitOps Demystified: A New Infrastructure Paradigm
    GitOps isn't just a catchy buzzword; it's a methodical rethink of how infrastructure should be managed.

    It treats Git as the definitive blueprint for your live systems: everything from server settings to application deployments is declared, versioned, and stored in repositories.

    With Git as the single source of truth, every adjustment is tracked, reversible, and auditable, turning ops into a transparent, code-centric process.

    Beyond simple CI/CD, GitOps introduces a continuous reconciliation model: specialized agents continuously compare the actual state of systems against the desired state in Git and correct any discrepancies automatically.
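
    To make the reconciliation model concrete, here is a deliberately minimal sketch of such an agent loop. It is illustrative only: real GitOps agents (Flux, Argo CD, and similar tools) do far more, and the repository path, poll interval, and kubectl apply step below are placeholder assumptions rather than any particular tool's behavior.

        # Minimal sketch of a GitOps-style reconciliation loop (illustrative only).
        # REPO_DIR and APPLY_CMD are placeholder assumptions, not a real tool's config.
        import subprocess
        import time

        REPO_DIR = "/var/lib/gitops/desired-state"            # local clone of the Git "source of truth"
        APPLY_CMD = ["kubectl", "apply", "-f", "manifests/"]   # any idempotent apply step works here

        def sync_repo() -> str:
            # Pull the latest declared state and report which commit we are on.
            subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=True)
            head = subprocess.run(["git", "-C", REPO_DIR, "rev-parse", "HEAD"],
                                  check=True, capture_output=True, text=True)
            return head.stdout.strip()

        def reconcile() -> None:
            # Re-applying the declared state converges the live system toward Git.
            subprocess.run(APPLY_CMD, cwd=REPO_DIR, check=True)

        if __name__ == "__main__":
            while True:
                commit = sync_repo()
                reconcile()
                print("Reconciled to commit", commit)
                time.sleep(60)   # poll interval; real agents also watch for drift
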
    Why Linux and GitOps Are a Natural Pair
    Linux stands at the heart of infrastructure: servers, containers, edge systems, you name it. When GitOps is layered onto that:

    You'll leverage Linux’s scripting capabilities (like bash) to craft powerful, domain-specific automation that dovetails perfectly with GitOps agents.

    The transparency of Git coupled with Linux’s flexible architecture simplifies debugging, auditing, and recovery.

    The combination gives infrastructure teams the agility to iterate faster while keeping control rigorous and secure.
    Architecting GitOps Pipelines for Linux Environments
    Structuring Repositories Deliberately
    A well-organized Git setup is crucial:

    Use separate repositories or disciplined directory structures for:

    Infrastructure modules (e.g., Terraform, networking, VMs),

    Platform components (monitoring, ingress controllers, certificates),

    Application-level configurations (Helm overrides, container versions).

    This separation helps ensure access controls align with responsibilities and limits risks from misconfiguration or accidental cross-impact.
    Go to Full Article

