Recent Changes - Search:
NTLUG

Linux is free.
Life is good.

Linux Training
10am on Meeting Days!

1825 Monetary Lane Suite #104 Carrollton, TX

Do a presentation at NTLUG.

What is the Linux Installation Project?

Real companies using Linux!

Not just for business anymore.

Providing ready to run platforms on Linux

Show Descriptions... (Show All) (Single Column)

LinuxSecurity - Security Advisories







LXer Linux News






  • Diamond Technologies Adds New OSM Modules Based on i.MX 91, i.MX 93, and STM32MP2
    Diamond Technologies, Inc. has introduced three new OSM-based embedded computing modules: the DEC i.MX 91-OSM, DEC i.MX 93-OSM, and DEC MP2-OSM. Designed and supported in the United States, the modules expand the company’s embedded lineup with new processing, connectivity, and edge-level compute options. The new modules target connected industrial equipment, energy-efficient edge systems, and compact […]



  • Radxa Unveils Solder-Down rCore Module Line With RK3308 and IQ-9075 Edge AI Variants
    Radxa has introduced two new solder-down modules in its rCore series, using LGA or castellated interfaces for compact, vibration-resistant designs suited to industrial, transportation, and rugged IoT systems. The new rCore-RK3308 and rCore-Q9075 modules integrate processing, memory, storage, and connectivity within small footprints to support applications ranging from low-power IoT to high-performance edge AI. The […]




Error: It's not possible to reach RSS file http://www.newsforge.com/index.rss ...

Slashdot

  • Was the Moon-Forming Protoplanet 'Theia' a Neighbor of Earth?
    Theia crashed into earth and formed the moon, the theory goes. But then where did Theia come from? The lead author on a new study says "The most convincing scenario is that most of the building blocks of Earth and Theia originated in the inner Solar System. Earth and Theia are likely to have been neighbors." Though Theia was completely destroyed in the collision, scientists from the Max Planck Institute for Solar System Research led a team that was able to measure the ratio of tell-tale isotopes in Earth and Moon rocks, Euronews explains:The research team used rocks collected on Earth and samples brought back from the lunar surface by Apollo astronauts to examine their isotopes. These isotopes act like chemical fingerprints. Scientists already knew that Earth and Moon rocks are almost identical in their metal isotope ratios. That similarity, however, has made it hard to learn much about Theia, because it has been difficult to separate material from early Earth and material from the impactor. The new research attempts a kind of planetary reverse engineering. By examining isotopes of iron, chromium, zirconium and molybdenum, the team modelled hundreds of possible scenarios for the early Earth and Theia, testing which combinations could produce the isotope signatures seen today. Because materials closer to the Sun formed under different temperatures and conditions than those further out, those isotopes exist in slightly different patterns in different regions of the Solar System. By comparing these patterns, researchers concluded that Theia most likely originated in the inner Solar System, even closer to the Sun than the early Earth. The team published their findings in the journal Science. Its title? "The Moon-forming impactor Theia originated from the inner Solar System."


    Read more of this story at Slashdot.


  • Cryptologist DJB Criticizes Push to Finalize Non-Hybrid Security for Post-Quantum Cryptography
    In October cryptologist/CS professor Daniel J. Bernstein alleged that America's National SecurityAgency (and its UK counterpart GCHQ) were attempting to influence NIST to adopt weaker post-quantum cryptographystandards without a "hybrid" approach that would've also included pre-quantum ECC. Bernstein is of the opinion that "Given howmany post-quantum proposals have been broken and the continuing flood of side-channel attacks, any competent engineering evaluation will conclude thatthe best way to deploy post-quantum [PQ] encryption for TLS, and for the Internet more broadly, is as double encryption: post-quantum cryptography on top of ECC." Buthe says he's seen it playing out differently:By 2013, NSA had a quarter-billion-dollar-a-yearbudget to "covertly influence and/or overtly leverage"systems to "make the systems in question exploitable"; inparticular, to "influence policies, standards and specificationfor commercial public key technologies". NSA is quietlyusing stronger cryptography for the data it cares about, butmeanwhile is spending money to promote a market for weakenedcryptography, the same way that it successfully created decades ofsecurity failures by building up the market for, e.g., 40-bitRC4 and 512-bitRSA and Dual EC. I looked concretely at what was happening in IETF'sTLS working group, compared to the consensusrequirements for standards-development organizations. I reviewedhow a call for "adoption" of an NSA-driven specification produced a variety of objections that weren'thandled properly. ("Adoption" is a preliminary step before IETF standardization....) On 5 November 2025, the chairs issued "last call" for objections to publication of the document. The deadline for input is "2025-11-26", this coming Wednesday. Bernstein also shares concerns about how the Internet Engineering Task Force is handling the discussion, and argues that the document is even "out of scope" for theIETF TLS working groupThis document doesn't serve any of the official goals in the TLS working group charter. Most importantly, this document is directly contrary to the "improve security" goal, so it would violate the charter even if it contributed to another goal... Half of the PQ proposals submitted to NIST in 2017 have been broken already... often with attacks having sufficiently low cost to demonstrate onreadily available computer equipment. Further PQ software has been broken by implementation issues such as side-channel attacks. He's also concerned about how that discussion is being handled:On 17 October 2025, they posted a "Notice of Moderation for Postings by D. J. Bernstein" saying that they would "moderate the postings of D. J. Bernstein for 30 days due to disruptive behavior effective immediately" and specifically that my postings "will be held for moderation and after confirmation by the TLS Chairs of being on topic and not disruptive, will be released to the list"... I didn't send anything to the IETF TLS mailing list for 30 days after that. Yesterday [November 22nd] I finished writing up my new objection and sent that in. And, gee, after more than 24 hours it still hasn't appeared... Presumably the chairs "forgot" to flip the censorship button off after 30 days. Thanks to alanw (Slashdot reader #1,822) for spotting the blog posts.


    Read more of this story at Slashdot.


  • Google Revisits JPEG XL in Chromium After Earlier Removal
    "Three years ago, Google removed JPEG XL support from Chrome, stating there wasn't enough interest at the time," writes the blog Windows Report. "That position has now changed."In a recent note to developers, a Chrome team representative confirmed that work has restarted to bring JPEG XL to Chromium and said Google "would ship it in Chrome" once long-term maintenance and the usual launch requirements are met. The team explained that other platforms moved ahead. Safari supports JPEG XL, and Windows 11 users can add native support through an image extension from Microsoft Store. The format is also confirmed for use in PDF documents. There has been continuous demand from developers and users who ask for its return. Before Google ships the feature in Chrome, the company wants the integration to be secure and supported over time. A developer has submitted new code that reintroduces JPEG XL to Chromium. This version is marked as feature complete. The developer said it also "includes animation support," which earlier implementations did not offer.


    Read more of this story at Slashdot.


  • Mozilla Announces 'TABS API' For Developers Building AI Agents
    "Fresh from announcing it is building an AI browsing mode in Firefox and laying the groundwork for agentic interactions in the Firefox 145 release, the corp arm of Mozilla is now flexing its AI muscles in the direction of those more likely to care," writes the blog OMG Ubuntu:If you're a developer building AI agents, you can sign up to get early access to Mozilla's TABS API, a "powerful web content extraction and transformation toolkit designed specifically for AI agent builders"... The TABS API enables devs to create agents to automate web interactions, like clicking, scrolling, searching, and submitting forms "just like a human". Real-time feedback and adaptive behaviours will, Mozilla say, offer "full control of the web, without the complexity." As TABS is not powered by a Mozilla-backed LLM you'll need to connect it to your choice of third-party LLM for any relevant processing... Developers get 1,000 requests monthly on the free tier, which seems reasonable for prototyping personal projects. Complex agentic workloads may require more. Though pricing is yet to be locked in, the TABS API website suggests it'll cost ~$5 per 1000 requests.Paid plans will offer additional features too, like lower latency and, somewhat ironically, CAPTCHA solving so AI can 'prove' it's not a robot on pages gated to prevent automated activities. Google, OpenAI, and other major AI vendors offer their own agentic APIs. Mozilla is pitching up late, but it plans to play differently. It touts a "strong focus on data minimisation and security", with scraped data treated ephemerally — i.e., not kept. As a distinction, that matters. AI agents can be given complex online tasks that involve all sorts of personal or sensitive data being fetched and worked with.... If you're minded to make one, perhaps without a motivation to asset-strip the common good, Mozilla's TABS API look like a solid place to start.


    Read more of this story at Slashdot.


  • One Company's Plan to Sink Nuclear Reactors Deep Underground
    Long-time Slashdot reader jenningsthecat shared this article from IEEE Spectrum:By dropping a nuclear reactor 1.6 kilometers (1 mile) underground, Deep Fission aims to use the weight of a billion tons of rock and water as a natural containment system comparable to concrete domes and cooling towers. With the fission reaction occurring far below the surface, steam can safely circulate in a closed loop to generate power. The California-based startup announced in October that prospective customers had signed non-binding letters of intent for 12.5 gigawatts of power involving data center developers, industrial parks, and other (mostly undisclosed) strategic partners, with initial sites under consideration in Kansas, Texas, and Utah... The company says its modular approach allows multiple 15-megawatt reactors to be clustered on a single site: A block of 10 would total 150 MW, and Deep Fission claims that larger groupings could scale to 1.5 GW. Deep Fission claims that using geological depth as containment could make nuclear energy cheaper, safer, and deployable in months at a fraction of a conventional plant's footprint... The company aims to finalize its reactor design and confirm the pilot site in the coming months. [Company founder Liz] Muller says the plan is to drill the borehole, lower the canister, load the fuel, and bring the reactor to criticality underground in 2026. Sites in Utah, Texas, and Kansas are among the leading candidates for the first commercial-scale projects, which could begin construction in 2027 or 2028, depending on the speed of DOE and NRC approvals. Deep Fission expects to start manufacturing components for the first unit in 2026 and does not anticipate major bottlenecks aside from typical long-lead items. In short "The same oil and gas drilling techniques that reliably reach kilometer-deep wells can be adapted to host nuclear reactors..." the article points out. Their design would also streamline construction, since "Locating the reactors under a deep water column subjects them to roughly 160 atmospheres of pressure — the same conditions maintained inside a conventional nuclear reactor — which forms a natural seal to keep any radioactive coolant or steam contained at depth, preventing leaks from reaching the surface." Other interesting points from the article:They plan on operating and controlling the reactor remotely from the surface.Company founder Muller says if an earthquake ever disrupted the site, "you seal it off at the bottom of the borehole, plug up the borehole, and you have your waste in safe disposal."For waste management, the company "is eyeing deep geological disposal in the very borehole systems they deploy for their reactors.""The company claims it can cut overall costs by 70 to 80 percent compared with full-scale nuclear plants.""Among its competition are projects like TerraPower's Natrium, notesthe tech news site Hackaday, saying TerraPower's fast neutron reactors "are already under construction and offer much more power per reactor, along with Natrium in particular also providing built-in grid-level storage. "One thing is definitely for certain..." they add. "The commercial power sector in the US has stopped being mind-numbingly boring."


    Read more of this story at Slashdot.


  • Could High-Speed Trains Shorten US Travel Times While Reducing Emissions?
    With some animated graphics, CNN "reimagined" what three of America's busiest air and road travel routes would look like with high-speed trains, for "a glimpse into a faster, more connected future."The journey from New York City to Chicago could take just over six hours by high-speed train at an average speed of 160 mph, cutting travel time by more than 13 hours compared with the current Amtrak route... The journey from San Francisco to Los Angeles could be completed in under three hours by high-speed train... The journey from Atlanta to Orlando could be completed in under three hours by high-speed train that reaches 160 mph, cutting travel time by over half compared with driving... While high-speed rail remains a fantasy in the United States, it is already hugely successful across the globe. Passengers take 3 billion trips annually on more than 40,000 miles of modern high-speed railway across the globe, according to the International Union of Railways. China is home to the world's largest high-speed rail network. The 809-mile train journey from Beijing to Shanghai takes just four and a half hours... In Europe, France's Train a Grand Vitesse (TGV) is recognized as a pioneer of high-speed rail technology. Spain soon followed France's success and now hosts Europe's most extensive high-speed rail network... [T]rain travel contributes relatively less pollution of every type, said Jacob Mason of the Institute for Transportation and Development Policy, from burning less gasoline to making less noise than cars and taking up less space than freeways. The reduction in greenhouse gas emissions is staggering: Per kilometer traveled, the average car or a short-haul flight each emit more than 30 times the CO2 equivalent than Eurostar high-speed trains, according to data from the UK government.


    Read more of this story at Slashdot.


  • Microsoft and GitHub Preview New Tool That Identifies, Prioritizes, and Fixes Vulnerabilities With AI
    "Security, development, and AI now move as one," says Microsoft's director of cloud/AI securityproduct marketing. Microsoft and GitHub "have launched a native integration between Microsoft Defender for Cloud and GitHub Advanced Security that aims to address what one executive calls decades of accumulated security debt in enterprise codebases..." according to The New Stack:The integration, announced this week in San Francisco at theMicrosoftIgnite 2025 conference and now available in public preview,connects runtime intelligence from production environments directlyinto developer workflows. The goal is to help organizationsprioritize which vulnerabilities actually matter and use AI to fixthem faster. "Throughout my career, I've seen vulnerabilitytrends going up into the right. It didn't matter how good of adetectionengine and how accurate our detection engine was, people justcouldn't fix things fast enough," said MarceloOliveira, VP of product management at GitHub, who has spentnearly a decade in application security. "That basically resultedin decades of accumulation of security debt into enterprise codebases." According to industry data, critical and high-severityvulnerabilities constitute 17.4% of security backlogs, with a meantime to remediation of 116 days, said AndrewFlick, senior director of developer services, languages and toolsat Microsoft, in a blogpost. Meanwhile, applications face attacks as frequently as onceevery three minutes, Oliveira said. The integration represents the first native link between runtimeintelligence and developer workflows, said ElifAlgedik, director of product marketing for cloud and AI securityat Microsoft, in a blogpost... The problem, according to Flick, comes down to threechallenges: security teams drowning in alert fatigue while AI rapidlyintroduces new threatvectors that they have little time to understand; developerslacking clear prioritization while remediation takes too long; andboth teams relying on separate, nonintegrated tools that makecollaboration slow and frustrating... The new integration worksbidirectionally. When Defender for Cloud detects a vulnerability in arunning workload, that runtime context flows into GitHub, showingdevelopers whether the vulnerability is internet-facing, handlingsensitive data or actually exposed in production. This is powered bywhat GitHub calls the Virtual Registry, which creates code-to-runtimemapping, Flick said... In the past, this alert would age in a dashboard while developersworked on unrelated fixes because they didn't know this was thecritical one, he said. Now, a security campaign can be created inGitHub, filtering for runtime risk like internet exposure orsensitive data, notifying the developer to prioritize this issue. GitHub Copilot "now automatically checks dependencies, scansfor first-party code vulnerabilities and catches hardcoded secretsbefore code reaches developers," the article points out — butGitHub's VP of product management says this takes things evenfurther. "We're not only helping you fix existing vulnerabilities,we're also reducing the number of vulnerabilities that come intothe system when the level of throughput of new code being created isincreasing dramatically with all these agentic coding agent platforms."


    Read more of this story at Slashdot.


  • Engineers are Building the Hottest Geothermal Power Plant on Earth - Next to a US Volcano
    "On the slopes of an Oregon volcano, engineers are building the hottest geothermal power plant on Earth," reports the Washington Post:The plant will tap into the infernal energy of Newberry Volcano, "one of the largest and most hazardous active volcanoes in the United States," according to the U.S. Geological Survey. It has already reached temperatures of 629 degrees Fahrenheit, making it one of the hottest geothermal sites in the world, and next year it will start selling electricity to nearby homes and businesses. But the start-up behind the project, Mazama Energy, wants to crank the temperature even higher — north of 750 degrees — and become the first to make electricity from what industry insiders call "superhot rock." Enthusiasts say that could usher in a new era of geothermal power, transforming the always-on clean energy source from a minor player to a major force in the world's electricity systems. "Geothermal has been mostly inconsequential," said Vinod Khosla, a venture capitalist and one of Mazama Energy's biggest financial backers. "To do consequential geothermal that matters at the scale of tens or hundreds of gigawatts for the country, and many times that globally, you really need to solve these high temperatures." Today, geothermal produces less than 1 percent of the world's electricity. But tapping into superhot rock, along with other technological advances, could boost that share to 8 percent by 2050, according to the International Energy Agency (IEA). Geothermal using superhot temperatures could theoretically generate 150 times more electricity than the world uses, according to the IEA. "We believe this is the most direct path to driving down the cost of geothermal and making it possible across the globe," said Terra Rogers, program director for superhot rock geothermal at the Clean Air Task Force, an environmentalist think tank. "The [technological] gaps are within reason. These are engineering iterations, not breakthroughs." The Newberry Volcano project combines two big trends that could make geothermal energy cheaper and more widely available. First, Mazama Energy is bringing its own water to the volcano, using a method called "enhanced geothermal energy"... [O]ver the past few decades, pioneering projects have started to make energy from hot dry rocks by cracking the stone and pumping in water to make steam, borrowing fracking techniques developed by the oil and gas industry... The Newberry project also taps into hotter rock than any previous enhanced geothermal project. But even Newberry's 629 degrees fall short of the superhot threshold of 705 degrees or above. At that temperature, and under a lot of pressure, water becomes "supercritical" and starts acting like something between a liquid and a gas. Supercritical water holds lots of heat like a liquid, but it flows with the ease of a gas — combining the best of both worlds for generating electricity... [Sriram Vasantharajan, Mazama's CEO] said Mazama will dig new wells to reach temperatures above 750 degrees next year. Alongside an active volcano, the company expects to hit that temperature less than three miles beneath the surface. But elsewhere, geothermal developers might have to dig as deep as 12 miles. While Mazama plans to generate 15 megawatts of electricity next year, it hopes to eventually increase that to 200 megawatts. (And the company's CEO said it could theoretically generate five gigawatts of power.) But more importantly, successful projects "motivate other players to get into the market," according to a senior geothermal research analyst at energy consultancy Wood Mackenzie, who predicted "a ripple effect," to the Washington Post where "we'll start seeing more companies get the financial support to kick off their own pilots."


    Read more of this story at Slashdot.


  • How the Internet Rewired Work - and What That Tells Us About AI's Likely Impact
    "The internet did transform work — but not the way 1998 thought..." argues the Wall Street Journal. "The internet slipped inside almost every job and rewired how work got done." So while the number of single-task jobs like travel agent dropped, most jobs "are bundles of judgment, coordination and hands-on work," and instead the internet brought "the quiet transformation of nearly every job in the economy... Today, just 10% of workers make minimal use of the internet on the job — roles like butcher and carpet installer."[T]he bigger story has been additive. In 1998, few could conceive of social media — let alone 65,000 social-media managers — and 200,000 information-security analysts would have sounded absurd when data still lived on floppy disks... Marketing shifted from campaign bursts to always-on funnels and A/B testing. Clinics embedded e-prescribing and patient portals, reshaping front-office and clinical handoffs. The steps, owners and metrics shifted. Only then did the backbone scale: We went from server closets wedged next to the mop sink to data centers and cloud regions, from lone system administrators to fulfillment networks, cybersecurity and compliance. That is where many unexpected jobs appeared. Networked machines and web-enabled software quietly transformed back offices as much as our on-screen lives. Similarly, as e-commerce took off, internet-enabled logistics rewired planning roles — logisticians, transportation and distribution managers — and unlocked a surge in last-mile work. The build-out didn't just hire coders; it hired coordinators, pickers, packers and drivers. It spawned hundreds of thousands of warehouse and delivery jobs — the largest pockets of internet-driven job growth, and yet few had them on their 1998 bingo card... Today, the share of workers in professional and managerial occupations has more than doubled since the dawn of the digital era. So what does that tell us about AI? Our mental model often defaults to an industrial image — John Henry versus the steam drill — where jobs are one dominant task, and automation maps one-to-one: Automate the task, eliminate the job. The internet revealed a different reality: Modern roles are bundles. Technologies typically hit routine tasks first, then workflows, and only later reshape jobs, with second-order hiring around the backbone. That complexity is what made disruption slower and more subtle than anyone predicted. AI fits that pattern more than it breaks it... [LLMs] can draft briefs, summarize medical notes and answer queries. Those are tasks — important ones — but still parts of larger roles. They don't manage risk, hold accountability, reassure anxious clients or integrate messy context across teams. Expect a rebalanced division of labor: The technical layer gets faster and cheaper; the human layer shifts toward supervision, coordination, complex judgment, relationship work and exception handling. What to expect from AI, then, is messy, uneven reshuffling in stages. Some roles will contract sharply — and those contractions will affect real people. But many occupations will be rewired in quieter ways. Productivity gains will unlock new demand and create work that didn't exist, alongside a build-out around data, safety, compliance and infrastructure. AI is unprecedented; so was the internet. The real risk is timing: overestimating job losses, underestimating the long, quiet rewiring already under way, and overlooking the jobs created in the backbone. That was the internet's lesson. It's likely to be AI's as well.


    Read more of this story at Slashdot.


  • Microsoft Warns Its Windows AI Feature Brings Data Theft and Malware Risks, and 'Occasionally May Hallucinate'
    "Copilot Actions on Windows 11" is currently available in Insider builds (version 26220.7262) as part of Copilot Labs, according to a recent report, "and is off by default, requiring admin access to set it up." But maybe it's off for a good reason...besides the fact that it can access any apps installed on your system:In a support document, Microsoft admits that features like Copilot Actions introduce " novel security risks ." They warn about cross-prompt injection (XPIA), where malicious content in documents or UI elements can override the AI's instructions. The result? " Unintended actions like data exfiltration or malware installation ." Yeah, you read that right. Microsoft is shipping a feature that could be tricked into installing malware on your system. Microsoft's own warning hits hard: "We recommend that you only enable this feature if you understand the security implications." When you try to enable these experimental features, Windows shows you a warning dialog that you have to acknowledge. ["This feature is still being tested and may impact the performance or security of your device."] Even with these warnings, the level of access Copilot Actions demands is concerning. When you enable the feature, it gets read and write access to your Documents, Downloads, Desktop, Pictures, Videos, and Music folders... Microsoft says they are implementing safeguards. All actions are logged, users must approve data access requests, the feature operates in isolated workspaces, and the system uses audit logs to track activity. But you are still giving an AI system that can "hallucinate and produce unexpected outputs" (Microsoft's words, not mine) full access to your personal files. To address this, Ars Technica notes, Microsoft added this helpful warning to its support document this week. "As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs." But Microsoft didn't describe "what actions they should take to prevent their devices from being compromised. I asked Microsoft to provide these details, and the company declined..."


    Read more of this story at Slashdot.


The Register

  • Weaponized file name flaw makes updating glob an urgent job
    PLUS: CISA issues drone warning; China-linked DNS-hijacking malware; Prison for BTC Samourai; And more
    Infosec In Brief Researchers have urged users of the glob file pattern matching library to update their installations, after discovery of a years-old remote code execution flaw in the tool's CLI.…


  • Bossware booms as bots determine whether you're doing a good job
    A lot of companies are turning to employee monitoring tools to make sure workers aren't slacking off
    The COVID-19 lockdown meant a surge in remote work, and the trend toward remote and hybrid workplaces has persisted long after the pandemic receded. That has changed the nature of workplace management as well. Bosses can't check for butts in seats or look over their employees' shoulders in the office to make sure they're working instead of having a LAN party. So they've turned to software tools to fill the gap.…



  • Copackaged optics have officially found their killer app - of course it's AI
    With power in such short supply, every watt counts
    SC25 Power is becoming a major headache for datacenter operators as they grapple with how to support ever larger deployments of GPU servers - so much so that the AI boom is now driving the adoption of a technology once thought too immature and failure-prone to merit the risk.…


  • Self-destructing thumb drive can brick itself and wipe your secret files away
    Catch: you have to plug it into a computer first
    If you’ve ever watched Mission Impossible, where Jim Phelps gets instructions from an audio tape that catches fire after five seconds, TeamGroup has an external SSD with your name on it. The T-Create Expert P35S is a portable USB-powered SSD that comes with a self-destruct button, which wipes all your data and physically renders the device useless.…


  • Researchers get inside the mind of bots, find out what texts they trained on
    RECAP agent overcomes model alignment efforts to hide memorized proprietary content
    If you've ever wondered whether that chatbot you're using knows the entire text of a particular book, answers are on the way. Computer scientists have developed a more effective way to coax memorized content from large language models, a development that may address regulatory concerns while helping to clarify copyright infringement claims arising from AI model training and inference.…



  • Makers slam Qualcomm for tightening the clamps on Arduino
    But the Wiring folks were disenchanted even before Qualcomm swallowed Arduino
    Qualcomm quietly rewrote the terms of service for its newest acquisition, programmable microcontroller and SBC maker Arduino, drawing intense fire from the maker community for grabbing additional rights to user-generated content on its platform and prohibiting reverse-engineering of what was once very open software.…


  • Pentagon pumps $29.9M into bid to turn waste into critical minerals
    It's unclear how much scandium and gallium ElementUSA will contribute to the supply chain, or when
    The US Department of Defense is asserting its desire to be an integral part of the American rare earths and critical minerals supply chain with a deal to establish a domestic pipeline of gallium and scandium production.…



Page last modified on November 02, 2011, at 09:59 PM