Framework Blog RSS feed for Framework Blog https://frame.work/ https://frame.work/blog/updates-on-memory-pricing-and-navigating-the-volatile-memory-market Navigating the volatile silicon market: updates on memory and storage pricing https://frame.work/blog/updates-on-memory-pricing-and-navigating-the-volatile-memory-market <p><strong>Updated on December 17th, 2025</strong></p> <p>Trailing behind shortages and price increases from suppliers on memory, we’re also seeing costs of storage <a href="https://pcpartpicker.com/trends/price/internal-hard-drive/" target="_blank" rel="noopener">increase rapidly in recent weeks</a>. Our suppliers indicate that pricing will continue to increase in early 2026 and likely beyond. Like with memory, our recent pricing on storage has been both below the market pricing for these modules and below the costs at which we can purchase new modules from suppliers. With that, we have now updated pricing on storage to reflect our new purchase prices from suppliers. We’re following the same process that we are with memory, where we will keep the original prices on all existing pre-orders, will update this post each time we update prices, will limit price increases to only cover increases in costs, and will bring pricing back down when costs decrease in the future.<br></p> <p><strong>Original blog post</strong></p> <p>Today, we increased our pricing on the DDR5 memory configurable in Framework Laptop DIY Edition orders by 50% to begin to respond to the substantially higher costs we are facing from suppliers and distributors. The new pricing remains below what is available <a href="https://pcpartpicker.com/trends/price/memory/" target="_blank" rel="noopener">in the open market</a>. We aren’t changing pricing on any existing pre-orders, and we also are not yet updating pricing on our pre-built laptops or Framework Desktop which come with memory (this makes the <a href="https://frame.work/products/desktop-diy-amd-aimax300/configuration/new" target="_blank" rel="noopener">128GB config of Framework Desktop</a> a bargain).  As always, we also offer the option to buy a DIY Edition laptop with no memory or storage included, letting you re-use modules you have or find deals where you can.</p> <p>The memory market is currently extremely volatile and we expect costs from our suppliers to continue to increase over the next weeks and months. It is highly likely that we will need to make further price updates on both DDR5 modules and on our systems that come with memory, whether DDR5, LPDDR5X, or GDDR. Like we did <a href="https://frame.work/blog/tariff-driven-price-and-availability-changes-for-us-customers" target="_blank" rel="noopener">during the fluctuating tariff environment</a> earlier in 2025, we commit to three principles throughout this:</p> <ol><li>We are going to stay transparent. Any time we change memory or system pricing, we are going to let you know and explain the reasoning behind it.</li><li>We won’t use this as an excuse to be extractive. We’ll only increase pricing to cover increases in our costs, and where possible, we’ll absorb costs to maintain stability in the pricing we put in front of you.</li><li>Just like we did with tariffs, when our costs go back down in the future, we’ll reduce our pricing and update this blog post to reflect the change.</li></ol> <p>For more context on what is driving the cost increases throughout the industry, there is currently a massive supply and demand imbalance for memory. 
On the demand side, the boom in AI data center construction and server manufacturing is consuming immense amounts of memory. A single rack of NVIDIA’s GB300 solution uses 20TB of HBM3E and 17TB of LPDDR5X. That’s enough LPDDR5X for a thousand laptops, and an AI-focused data center is loaded with thousands of these racks! On the supply side, the memory industry since its inception decades ago has gone through repeated boom and bust cycles, making the three main surviving memory die makers, Micron, SK Hynix, and Samsung, hesitant to speculatively invest the billions of dollars needed for fabrication capacity expansion. Now that the demand exists again, there is a years-long lag time to catch up on supply. Worse for us in the PC space, though, both the existing capacity and the new capacity are being prioritized for higher-margin server-focused memory like HBM and for the server markets for DDR5 and LPDDR5X over the PC market.</p> <p>We have strong partnerships with Micron (one of the biggest manufacturers of both memory dies and modules), memory module makers like ADATA who source from all three of the big memory die suppliers, and memory distributors, and our DIY Edition model gives us a lot of flexibility to navigate constrained and rapidly changing environments like this. We’ll continue to keep you informed throughout, and we’ll do everything we can to keep memory available to you.</p> <p>Note: Because our current memory pricing is substantially below market, we are adjusting our return policy to prevent scalpers from purchasing DIY Edition laptops with memory and returning the laptop while keeping the memory. Laptop returns will also require the memory from the order to be returned.</p> Wed, 17 Dec 2025 20:10:15 +0000 https://frame.work/blog/press-reviews-for-the-new-framework-laptop-16-are-live Press reviews for the new Framework Laptop 16 are live! https://frame.work/blog/press-reviews-for-the-new-framework-laptop-16-are-live <p>We’ve just hit an exciting milestone: the first press reviews for the new <a href="https://frame.work/laptop16" target="_blank" rel="noopener">Framework Laptop 16</a> are live, and we’re starting to ship the first batch of pre-orders next week. <a href="https://frame.work/products/laptop16-diy-amd-ai300/configuration/new" target="_blank" rel="noopener">Pre-orders are still open</a>, with the current open batch shipping in December.</p> <p>When we first introduced Framework Laptop 16 two years ago, we set an extremely ambitious product target: bringing the depth of modularity and upgradeability available in desktop PCs into a thin, high-performance laptop. This included the path that many of you wanted most out of a notebook: upgradeable discrete graphics. Reviewers are pleased to see that we’ve delivered on the promise of graphics upgradeability, bringing <strong>NVIDIA graphics</strong> into the Framework ecosystem for the first time with the latest-generation RTX 5070 GPU. We paired that with a new Mainboard built on <strong>AMD Ryzen AI 300 Series</strong> processors and introduced one of the first <strong>240W USB-C power adapters</strong> on the market, both of which got positive callouts in the reviews too. 
A few reviewers swapped the new modules into their original Framework Laptop 16’s and got to experience deep upgradeability first-hand.</p> <p>Here are a few of our favorite quotes from the early reviews:<br><br>&quot;<strong>But what I really love about the Framework Laptop 16 is that if you already bought one, you don&#39;t have to junk it to upgrade—you can just buy the newer, better components direct from Framework and install them yourself!</strong></p> <p><strong>That feature alone makes the Framework Laptop 16 my favorite laptop of the year, and our lab testing proves that upgrades like the new Nvidia GeForce RTX 5070 GPU module make a big difference in gaming performance.&quot;</strong></p> <p>– <a href="https://www.tomsguide.com/computing/laptops/framework-laptop-16-2025-review" target="_blank" rel="noopener">Tom&#39;s Guide</a></p> <p><strong>&quot;The Framework Laptop 16 offers users a high-performance, fully-repairable desktop-quality laptop with support up to four external displays. I highly recommend it to developers and enthusiasts who run Linux and tinker with upgradeable modules, or anyone who wants a sustainable laptop to escape the technology upgrade rat race.&quot;</strong></p> <p>– <a href="https://www.zdnet.com/article/ready-for-a-diy-laptop-i-assembled-frameworks-new-pc-in-less-than-30-minutes/" target="_blank" rel="noopener">ZDNET</a></p> <p><strong>&quot;Simply put, the RTX 5070 attachment placed the Framework Laptop 16 in its own class among this group. If you’re looking for graphics power, this is a legitimate option, not just a modest boost. It outperformed the more traditional RTX 5070 implementation in the Dell 16 Premium, and it smoked the rest of the competition, including the MacBook Pro.&quot;</strong></p> <p>– <a href="https://www.pcmag.com/reviews/framework-laptop-16-2025" target="_blank" rel="noopener">PCMag</a></p> <p>Part of upgradeability is making sure that the modules being swapped out don’t go to waste. Alongside the new generation Mainboard launch, we’ve released a <a href="https://github.com/FrameworkComputer/Framework-Laptop-16/tree/main/Case" target="_blank" rel="noopener">3D-printable case</a> that you can drop your original Mainboard into to run it as a standalone mini-PC. We’re also working on ways to enable Graphics Module re-use, and we’ll have more to share on that early next year. If you haven’t seen it yet, we have one other 3D-printable item for Framework Laptop 16 that’s been popular in the community: a <a href="https://github.com/FrameworkComputer/Framework-Laptop-16/tree/main/Touchpad" target="_blank" rel="noopener">one-piece touchpad</a> row to replace the default three-piece version.</p> <p>We had a bit of extra fun with the review units this time. We partnered with a UV-printing service, Printeers, to custom-print artwork on the Spacers that are part of the hot-swappable Input Module system. We made extras of these, and we’re running a community giveaway of them for current Framework Laptop 16 owners. You can participate in the community thread <a href="https://frameworkcomputer.typeform.com/to/w9Kcj697" target="_blank" rel="noopener">here</a>. <em>The giveaway is now closed. Winners have been contacted by email provided in their entry. Thank you for participating!</em></p> <p>We have a few other announcements to share. When we launched the 61Wh battery for Framework Laptop 13, we kept the original 55Wh version in some configurations to consume remaining material in the supply base. 
That material is now all utilized, so we’ve introduced a <a href="https://frame.work/products/laptop-diy-13-gen-amd/configuration/new" target="_blank" rel="noopener">new entry-level Framework Laptop 13</a> (Ryzen 7040 Series) configuration that brings in the 61Wh battery and 2nd Gen Webcam. Finally, we’re also close to completing pre-orders of the <a href="https://frame.work/products/desktop-diy-amd-aimax300/configuration/new" target="_blank" rel="noopener">128GB configuration of Framework Desktop</a> and will be able to bring it in-stock soon alongside the already in-stock 32GB and 64GB configs.</p> Wed, 19 Nov 2025 23:40:07 +0000 https://frame.work/blog/framework-sponsorships Framework sponsorships https://frame.work/blog/framework-sponsorships <p><strong>Updated at 1:30pm PT December 3rd</strong></p> <p>In October, we shared the open source organization sponsorships we made to date in 2025.  We’re now happy to share the latest batch of donations, bringing our total for the year to over $225,000.  We focus on supporting projects that create the open source software that makes our products work, and the latest group includes a number of incredible distros.</p> <p>First, we’re sponsoring <a href="https://archlinux.org/" target="_blank" rel="noopener">Arch Linux</a>, which is consistently in the top three most popular distros for Framework Laptop and Framework Desktop owners.  This is an awesome power user choice with a very broad community and a deep knowledgebase in its <a href="https://wiki.archlinux.org/title/Framework_Laptop_13" target="_blank" rel="noopener">wiki</a>.</p> <p>Downstream of that, we’re also supporting <a href="https://cachyos.org/" target="_blank" rel="noopener">CachyOS</a>, which is an Arch Linux based distro specifically focused on performance.  CachyOS consistently comes up on top in performance benchmarks, like this <a href="https://www.phoronix.com/review/cachyos-ubuntu-2510-f43" target="_blank" rel="noopener">recent one done by Phoronix</a> on a Framework Desktop.</p> <p>Next up, we have <a href="https://www.debian.org/" target="_blank" rel="noopener">Debian</a>, which is actually the first distro I used when I started my Linux journey in 2003.  This is one of the more popular choices for Framework Laptop users, and it’s also the distro that Ubuntu, Linux Mint, and many other major distros are rooted in.</p> <p>Next, we’re supporting <a href="https://bazzite.gg/" target="_blank" rel="noopener">Bazzite</a>, which is one of the fastest growing Linux distros, and a very popular choice for Framework Desktop users interested in gaming.  Bazzite is some of the clearest proof that the year of the Linux Desktop is here, enabling a range of games platforms with ease of use that relatively recently required Windows.</p> <p>The last Linux distro in this batch of sponsorships is <a href="https://nixos.org/" target="_blank" rel="noopener">NixOS</a>, which is a <a href="https://frame.work/linux" target="_blank" rel="noopener">community supported distro</a> across each of our products.  In addition to being popular in the Framework Community, NixOS is also the distro of choice for one of our firmware engineers, making it one of the first distros we test new hardware with.</p> <p>Finally, we’re sponsoring both <a href="https://www.freebsd.org/" target="_blank" rel="noopener">FreeBSD</a> and <a href="https://www.netbsd.org/" target="_blank" rel="noopener">NetBSD</a>.  
We’ve been working with FreeBSD throughout the year as part of their <a href="https://github.com/FreeBSDFoundation/proj-laptop/" target="_blank" rel="noopener">Laptop Support and Usability Project</a>, and we’re happy to support the foundation with funding as well.  While NetBSD doesn’t work on our products yet (as far as we know), we love their mission of extreme portability and keeping older computers from turning into e-waste.</p> <p>You can see our full set of 2025 sponsorships in the table below.  We have a few more currently in progress that we’ll announce in the next batch.  As always, <a href="https://frameworkcomputer.typeform.com/to/VyQ09s0U" target="_blank" rel="noopener">let us know</a> of any other organizations you believe we should sponsor!</p> <iframe title="Framework Sponsorships" height="700" width="800" src="https://docs.google.com/spreadsheets/d/e/2PACX-1vRfwvqJ3DFALL2S7leEp12Iz3JZvJGqlBpPEj0Ug4tHOH5MmGkkdakegAo5IOWpTMLb-baf20HzzqkR/pubhtml?gid=0&single=true&amp;range=A1:C30&amp;single=true&amp;widget=false&amp;chrome=false&amp;headers=false"> </iframe> <p><strong>Original blog post</strong></p> <p>Both to get Framework products and our mission in front of more people and to support organizations that are working to scale people-and-planet-friendly hardware and open source software to more of the world, we make a number of sponsorships each year. This is in the form of both monetary donations and product donations. The list below covers our sponsorships since the start of 2025, and we’ll continue to keep this up to date over time. Note that this list does not include products sent for marketing use (e.g. press units and marketing activations at events) or R&amp;D use (e.g. pre-release units sent under NDA or production units sent to open source software developers and maintainers at Linux distros and other open source software organizations and hardware developers in the Framework community).</p> <p>We’re sharing this not just for visibility, but also because we want your help in identifying other organizations we can sponsor to help support open source software and hardware development among a broader base of developers and makers and to amplify our mission. If you have recommendations, please let us know by nominating the organization through <a href="https://frameworkcomputer.typeform.com/to/VyQ09s0U" target="_blank" rel="noopener">this form</a>. We can’t promise that we’ll be able to fund each one, but we will explore every nomination.</p> <iframe title="Framework Sponsorships" height="600" width="800" src="https://docs.google.com/spreadsheets/d/e/2PACX-1vRLkyuuXvjHdG6coHNuSvk5OS1qdlW3CNUNlCdrJKNBRTc84znZfZJZLu1ePI_kBS7pcOCRdVNiKUWE/pubhtml?gid=0&single=true&amp;range=A1:C23&amp;single=true&amp;widget=false&amp;chrome=false&amp;headers=false"> </iframe> Wed, 03 Dec 2025 21:55:18 +0000 https://frame.work/blog/extending-on-framework-desktop-stylus-availability-and-a-roundup-of-gaming-performance Extending on Framework Desktop, Stylus availability, and a roundup of gaming performance https://frame.work/blog/extending-on-framework-desktop-stylus-availability-and-a-roundup-of-gaming-performance <p>We built Framework Desktop to be a tiny, use-case-flexible powerhouse. It’s a small form factor PC that is easy to set up, repair, and modify, while also carrying an immensely powerful AMD Ryzen AI Max processor inside. 
The high memory capacity, wide memory bandwidth, and large on-package GPU make it excel for local AI workloads and general productivity, but they also enable another use case we’re seeing a ton of excitement around: gaming.<br><br>While we’ve talked a lot about the Framework Desktop&#39;s gaming potential, it’s been exciting to see reviewers take it even further with their own deep dives. <a href="https://www.youtube.com/watch?v=HEsYsi3lmV0&ab_channel=ShortCircuit" target="_blank" rel="noopener">ShortCircuit</a> tested games like Cyberpunk 2077 and Alan Wake 2, reporting solid 1080p performance and outpacing a desktop RTX 4060. <a href="https://www.tomsguide.com/computing/mini-pcs/forget-consoles-i-spent-a-week-with-this-mini-pc-in-my-living-room-and-i-cant-believe-how-well-it-performs" target="_blank" rel="noopener">Tom’s Guide</a> tested Final Fantasy VII Rebirth at 4K with Radeon Super Resolution and came away impressed, even using the system as a living room console replacement. <a href="https://www.pcworld.com/article/2866400/framework-desktop-review.html" target="_blank" rel="noopener">PCWorld</a> tried competitive titles like Call of Duty: Black Ops 6, and <a href="https://www.wired.com/review/framework-desktop" target="_blank" rel="noopener">Wired</a> tested a few games like Marvel Rivals and Cyberpunk 2077. <a href="https://www.youtube.com/watch?v=sUje8zzMUI8&ab_channel=ETAPRIME" target="_blank" rel="noopener">ETA PRIME</a> used Bazzite on their Framework Desktop unit to transform their PC gaming setup into a console-like experience, testing out Borderlands 3, Spider-Man 2, and The Witcher 3. <a href="https://forum.level1techs.com/t/framework-desktop-128gb-ai-max-395-benchmarks/234790" target="_blank" rel="noopener">Level1Techs</a> benchmarked different titles like Final Fantasy XIV: DawnTrail with good results, which matches what we’ve seen internally: games that are well-optimized or support resolution scaling tend to perform especially well.<br><br>If you’ve been gaming on Framework Desktop, we’d love to see your setup. Share it with us in the <a href="https://community.frame.work/c/desktop/203" target="_blank" rel="noopener">Framework Community</a>, and if there are particular games you want us to test or settings you’d like to see us document, let us know. We’ll keep building based on what you’re excited about.</p> <p><strong>New Framework Desktop accessories available</strong></p> <p>We’ve been working on a number of updates to Framework Desktop that we’re excited to share. We’re adding new parts to the ecosystem to support a range of different use cases, so you can make the system the best fit for your needs.<br><br>For <a href="https://www.youtube.com/watch?v=N5xhOqlvRh4&ab_channel=JeffGeerling" target="_blank" rel="noopener">home lab builders and cluster enthusiasts</a>, the new DeskPi RackMate 10-inch 2U Mini-ITX Shelf is <a href="https://frame.work/products/deskpi-rackmate-10-inch-2u-mini-itx-shelf" target="_blank" rel="noopener">now available in the Marketplace</a>. It’s a 2U half-width-rack metal tray designed for the Framework Desktop Mainboard and Power Supply, fitting cleanly into 10” rack systems like the DeskPi RackMate T1. It’s a great foundation for building scalable, rack-mounted systems with the same modular principles we’ve built everything else around. 
Because it supports Mini-ITX and Flex ATX standards, you can use it with other off-the-shelf parts too!<br></p> <p>The Framework Desktop Handle<a href="https://frame.work/products/framework-desktop-handle" target="_blank" rel="noopener"><strong>​</strong></a> is also <a href="https://frame.work/products/framework-desktop-handle" target="_blank" rel="noopener">now available in the Marketplace</a> and the Framework Desktop configurator. This is a fun addition that makes your Framework Desktop easier to carry, whether you&#39;re heading to a LAN party, bringing your setup to an event, or just moving it in between your living room and your home office. If you have a pending pre-order, you can now modify your existing pre-order to add the handle. If you have a Framework Desktop already, it is also available separately in the Marketplace.<br><br><strong>Framework Laptop 12 Stylus now available</strong></p> <p>The Framework Laptop 12 Stylus<a href="https://frame.work/products/laptop12-stylus?v=FRAPBR000F" target="_blank" rel="noopener">​</a> is now available both <a href="https://frame.work/products/laptop12-stylus?v=FRAPBR000F" target="_blank" rel="noopener">in the Marketplace</a> and in the Framework Laptop 12 configurator. If you already have a pending Laptop 12 pre-order, you can now modify it to include the Stylus. It’s been exciting to see the module move from engineering samples into high volume manufacturing, with all of the refinements we’ve made along the way built in. Our focus has been on making sure the Stylus integrates cleanly with the Framework Laptop 12 ecosystem while keeping with our core goals of modularity and repairability. The Stylus is color matched to the five Framework Laptop 12 colorways and has both a replaceable tip and USB-C chargeable, replaceable battery.<br></p> <p><strong>Getting to in-stock</strong></p> <p>And lastly, pre-orders for Framework Laptop 12 and Framework Desktop (Max 385 - 32GB and Max+ 395 - 64GB configurations) are wrapping up. Once we transition to in-stock availability, orders for these configurations will ship out within a few days of being placed. We’re about halfway through pre-orders for the Framework Desktop Max+ 395 - 128GB configuration, and we’re continuing to move quickly to process and ship each of those as well.</p> <p><br>We’re incredibly excited about how these new additions continue to grow the Framework ecosystem, and we’re looking forward to hearing what you think and what else you’d like to see in the Marketplace.</p> Mon, 29 Sep 2025 22:16:40 +0000 https://frame.work/blog/introducing-the-new-framework-laptop-16-with-nvidia Introducing the new Framework Laptop 16 with NVIDIA® GeForce RTX™ 5070 https://frame.work/blog/introducing-the-new-framework-laptop-16-with-nvidia <p>We made a lot of major product announcements throughout the year, and we have one more big one for you today. We’re excited to announce the new Framework Laptop 16, now with AMD Ryzen™ AI 300 Series processors and a graphics upgrade to NVIDIA® GeForce RTX™ 5070 Laptop GPU! <a href="https://frame.work/laptop16" target="_blank" rel="noopener">Pre-orders are open now</a> starting at $1,499 USD, with first shipments this November. We first introduced Framework Laptop 16 in 2023 as a high-performance, desktop-replacement 16” laptop that carried in not only our usual repairability and upgradeability, but two bold new systems: fully customizable input and generational upgradeability of graphics. 
On the latter, especially since so many other laptop brands have failed at it, we knew that the only way we could prove upgradeability is by actually delivering an upgrade. We’ve spent the last two years working with the teams at AMD, NVIDIA, and Compal to not only make a new NVIDIA-powered Graphics Module, but also make it fully backwards compatible with the original Framework Laptop 16. That means any current owner can pick up the new module and get the latest generation of graphics!</p> <p>This is a huge leap in performance and capability. The GeForce RTX 5070 Laptop GPU brings NVIDIA’s latest Blackwell architecture with 8GB of GDDR7 and delivers a 30-40% increase in gaming framerates over our original Radeon RX 7700S Graphics Module. We made a couple of other improvements too. The GeForce RTX 5070 Laptop GPU now enables display output and power input over the rear USB-C port. We also revamped the thermal system, switching to Honeywell phase change thermal interface material and reoptimizing the fan blade geometry and controller IC for reduced noise while supporting 100W sustained TGP. The discrete GPU in the Graphics Module can send a display signal directly to the internal laptop display through a mux on the Mainboard, and we’ve updated our 165Hz 2560x1600 panel to support NVIDIA G-SYNC®. We’re also keeping the Radeon RX 7700S Graphics Module available as a configuration option with the updated thermal system for all of you who may prefer AMD offerings, especially for the maturity of their open-source Linux drivers.</p> <p>Going into the rest of the updates on Framework Laptop 16, we now offer the latest generation Ryzen™ AI 300 Series processors in 8-core AMD Ryzen™ AI 7 350 and 12-core AMD Ryzen™ AI 9 HX 370 options, both running at 45W sustained TDP. Both have highly capable integrated graphics if you’d like to use your Framework Laptop 16 with the Expansion Bay Shell instead of a Graphics Module. We’ve also updated the Mainboard design to support four simultaneous display outputs over the rear four Expansion Card slots. We of course kept memory and storage upgradeability, with two slots of DDR5-5600 supporting up to 96GB and two M.2 slots for up to 10TB.</p> <p>To support all of this combined GPU, CPU, and system performance, we’re excited to announce our new default power adapter for Framework Laptop 16: an ultra-high-power-density compact 240W USB-C adapter supporting the USB-PD 3.1 spec. We were the first laptop maker to ship a USB-C 180W adapter with the original Framework Laptop 16, and somehow nearly two years later, we may be the first to ship with 240W too! This added power means you can run the system at sustained full load without draining the battery.</p> <p>We have a handful of other refinements too. We’re now using the 2nd Gen Webcam that we first introduced last year on Framework Laptop 13. We’ve reoptimized the geometry of the CNC aluminum Top Cover to increase rigidity. We’ve also updated the modular keyboards in two ways. First, we’ve adjusted the firmware behavior to prevent the system from waking if keys are triggered while the lid is closed. That change is also coming soon as a firmware update for all current Framework Laptop 16 keyboards. Second, we’ve brought in the new keyboard artwork from Framework Laptop 12 and 13, meaning most keyboard options have no Windows logo, for all of the Linux users out there. 
We also have one keyboard option with a Copilot logo in case that’s something you want.</p> <p>We spent the last two years digging into customer and press feedback on Framework Laptop 16 and finding every way we could to improve it. We go more into the product and development process in the <a href="https://www.youtube.com/watch?v=OZRG7Og61mw" target="_blank" rel="noopener">launch video</a> we posted today on our YouTube channel. We also <a href="https://youtu.be/0RzUBqtgODM" target="_blank" rel="noopener">shared a video</a> digging into some of the ideas and prototypes we explored but couldn’t land this generation. If you have questions on either of these or any other part of Framework Laptop 16, we’re hosting a livestream on <a href="https://www.youtube.com/@FrameworkComputer" target="_blank" rel="noopener">YouTube</a> and <a href="https://www.twitch.tv/framework" target="_blank" rel="noopener">Twitch</a> at 8:45 PT on Aug 26th. You can also try our full set of new products hands-on at PAX West in Seattle from Aug 29 to Sept 1 and Rails World in Amsterdam from Sept 4 to 5. You can check out all of our upcoming events <a href="https://community.frame.work/t/2025-framework-events/74112" target="_blank" rel="noopener">here</a>.</p> <p>In addition to launching the new Framework Laptop 16 today, we’re reducing the pricing on the original generation, now starting at $1,299 USD. We have limited quantities of the Ryzen 9 configurations remaining, but will keep the Ryzen 7 versions in production and available as a lower cost entry point to Framework Laptop 16.</p> <p>As always, Framework Laptop 16 is available both pre-configured with Windows 11 and as a DIY Edition that you can assemble yourself, bringing your own memory, storage, and operating system, including Linux. Pre-orders are open now on the systems, the <a href="https://frame.work/products/laptop16-graphics-module-nvidia-geforce-rtx-5070" target="_blank" rel="noopener">GeForce RTX 5070 Graphics Module</a>, the <a href="https://frame.work/products/laptop16-mainboard-amd-ai300?v=FRAKKE0007" target="_blank" rel="noopener">Ryzen™ AI 300 Series-powered Mainboards</a>, and the new <a href="https://frame.work/products/power-adapter-240w" target="_blank" rel="noopener">240W Power Adapter</a>. We’re excited to see what you think of the new Framework Laptop 16!</p> Tue, 26 Aug 2025 15:22:25 +0000 https://frame.work/blog/we-have-something-big-coming We have something big coming https://frame.work/blog/we-have-something-big-coming <p>It’s been a very busy year at Framework, and we’re not done yet! We launched a new Framework Laptop 13 and two new product categories with Framework Laptop 12 and Framework Desktop. We’ve got one more big update for you, and you can tune into our YouTube channel August 26th at 8am PT to see what it is!</p> <p><a href="https://youtu.be/OZRG7Og61mw" target="_blank" rel="noopener"><strong>Get notified</strong></a></p> <p>This is the first time we’ve done this kind of YouTube-first launch, and we’re excited to see what you think of it. Our video content over the last year has been mostly launch focused, but you’ll be seeing a lot more soon across both <a href="https://www.youtube.com/@frameworkcomputer" target="_blank" rel="noopener">YouTube</a> and <a href="https://www.twitch.tv/framework" target="_blank" rel="noopener">Twitch</a>. 
You can subscribe to each to get notified when we go live or post something new.</p> <p>We continue to get awesome feedback on Framework Desktop, with additional reviews going live and the first orders reaching Batch 1 customers. ETA PRIME shared an excellent <a href="https://www.youtube.com/watch?v=sUje8zzMUI8" target="_blank" rel="noopener">video on using Bazzite</a> to make it a killer gaming system. Luke Miani did a <a href="https://www.youtube.com/watch?v=q7Bfq6DgiWI" target="_blank" rel="noopener">head-to-head</a> against Mac Mini and Mac Studio. Boiling Steam wrote a <a href="https://boilingsteam.com/framework-desktop-hands-on-first-impressions/" target="_blank" rel="noopener">great overview of building a Fedora 42 workstation</a> starting from a Framework Desktop Mainboard.</p> <p>Our factory is fully cranked up and outputting both Framework Laptop 12 and Framework Desktop systems as quickly as possible to fulfill all of the pre-order batches. July was a record high month for manufacturing volume for us, and we hope to beat that again in August! If you’d like to help us on this (and other parts of remaking consumer electronics), we’re growing the team too. Check out our <a href="http://jobs.frame.work" target="_blank" rel="noopener">careers page</a> and let us know if you know anyone amazing!</p> Tue, 19 Aug 2025 15:26:03 +0000 https://frame.work/blog/framework-desktop-press-reviews-are-live Framework Desktop press reviews are live! https://frame.work/blog/framework-desktop-press-reviews-are-live <p>Press reviews of <a href="https://frame.work/desktop" target="_blank" rel="noopener">Framework Desktop</a> are now live, and we’re starting shipments of Batch 1 pre-orders next week! This was the largest set of press units we’ve ever sent out for a product launch, both because so many reviewers wanted to try it out and because we wanted to show just how incredibly capable Ryzen AI Max is across a range of use cases. The reviews and videos posted today cover gaming, DIY PC building, machine learning, homelab, Linux workstation, and general PC productivity scenarios. Reviewers called out the multi-core performance, the workloads that 128GB of memory can enable, how quiet the system is both at idle and under load, and surprise at just how tiny it is. Here are some of our favorite highlights:</p> <p><br>&quot;<strong>Framework did good with this one. AMD really blew it out of the water with the 395+. We&#39;re spoiled to have such incredible hardware available for Linux at such appealing discounts over similar stuff from Cupertino. What a great time to love open source software and tinker-friendly hardware!&quot;</strong></p> <p>– <a href="https://world.hey.com/dhh/the-framework-desktop-is-a-beast-636fb4ff" target="_blank" rel="noopener">DHH</a></p> <p><strong>&quot;This is exactly the kind of setup that I want personally for personal AI. Think Poe from Altered Carbon meets Home Assistant, without ridiculous heat and power requirements.&quot;</strong></p> <p>– <a href="https://www.youtube.com/watch?v=ziZDzrDI7AM" target="_blank" rel="noopener">Level1Linux</a></p> <p><strong>&quot;I understand why companies are marketing this APU as an AI gaming box. Because at high settings with 1440p and FSR 3.0 set to balanced, I never dropped under 60 fps... It did all this while consuming 100 watts under load and never going above 60C. 
That&#39;s insane.&quot;</strong></p> <p>– <a href="https://www.youtube.com/watch?v=D7BehyQVVbU" target="_blank" rel="noopener">Salem Techsperts</a></p> <p>We have a large number of pre-orders<a href="https://frame.work/products/desktop-diy-amd-aimax300/configuration/new" target="_blank" rel="noopener">​</a> of both the <a href="https://frame.work/products/desktop-diy-amd-aimax300/configuration/new" target="_blank" rel="noopener">system</a> and <a href="https://frame.work/products/framework-desktop-mainboard-amd-ryzen-ai-max-300-series?v=FRAFMK0002" target="_blank" rel="noopener">Mainboards</a> that we&#39;re working through, and we’re ramping production as quickly as we can. Pre-orders are still open, with the current open batch shipping in Q4. We’re continuing to build out both the documentation and the ecosystem around the product. We now carry our favorite wireless gamepad, the <a href="https://frame.work/products/8bitdo-ultimate-2c-wireless-controller?v=FRANZA0003" target="_blank" rel="noopener">8BitDo Ultimate 2C</a> in the Framework Marketplace in US and Canada for those of you who want to bring your Framework Desktop into the living room. We’re also writing more guides around the machine learning use case, starting with a <a href="https://frame.work/blog/using-a-framework-desktop-for-local-ai" target="_blank" rel="noopener">getting started guide on using LM Studio for local AI</a>. This is a topic that is evolving quickly. OpenAI launched their new gpt-oss-120b model just this week, and <a href="https://x.com/FrameworkPuter/status/1952854105606766922" target="_blank" rel="noopener">it works out of the box</a> on Framework Desktop too!</p> <p>If you want to get hands on with Framework Desktop and the rest of our product line-up, we’ll be at a number of events across the US, Europe, and Taiwan over the next few months. Stop by our booths, pick up some stickers, and say hi to the team!</p> <ul><li>Quakecon - August 7 - 10 in Grapevine, TX</li><li>COSCUP - August 9 - 10 in Taipei, TW</li><li>Open Source Summit Europe - August 25 - 27 in Amsterdam, NL</li><li>PAX West - August 29 - September 1 in Seattle, WA</li><li>Rails World - September 4 - 5 in Amsterdam, NL</li><li>Maker Faire Bay Area - September 26 - 28 in Vallejo, CA</li><li>Texas Linux Festival - October 3 - 4 in Austin, TX</li><li>TwitchCon - October 17 - 19 in San Diego, CA</li><li>Hackaday Supercon - October 31 - November 2 in Pasadena, CA</li></ul> Thu, 07 Aug 2025 18:09:54 +0000 https://frame.work/blog/using-a-framework-desktop-for-local-ai Using a Framework Desktop for local AI https://frame.work/blog/using-a-framework-desktop-for-local-ai <p><strong>Updated on December 2nd, 2025</strong></p> <p>We know a lot of you are exploring <a href="https://frame.work/desktop?tab=machine-learning" target="_blank" rel="noopener">Framework Desktop to crunch machine learning</a> and local AI inference workloads right on your desk. This is a topic that goes deep, and we’re going to build out a series of guides and videos to help you get started. In this first one, we’ll go through the basics of getting started in the easiest way possible with local Large Language Models (LLMs) on Windows. In future guides, we’ll go deeper into code generation, image/video generation, running models on Linux, and clustering multiple Framework Desktops to handle massive models.</p> <h2>Why build a local AI PC?</h2> <p>First, you may be wondering why you’d even want to run AI locally, given how many cloud-based services and applications there are. 
One of the main reasons is the privacy you get from being able to keep all of your data local. Beyond that, you also have deeper control over model selection and modification, including being able to download, modify, and run uncensored models. If you’re running AI constantly and heavily, you may also save money running locally rather than paying for cloud time. Finally, because you’re running AI locally, you can also run it fully offline, making it useful off grid or as a backup source of knowledge when infrastructure is down.</p> <p>Traditionally, one of the big challenges with running AI inference locally is being able to run large models. Larger models in general have a deeper set of knowledge to draw from, but require substantial amounts of memory. Consumer graphics cards have plenty of compute and memory bandwidth to crunch AI inference, but are typically limited in memory capacity to 8, 16, or 24GB. The Ryzen AI Max in Framework Desktop has configurations with up to 128GB of memory, allowing much larger models. We’ll get deeper into model selection and tradeoffs later in this guide.<br></p> <h2>Getting started with the basics of local LLMs</h2> <p>There are a huge number of applications and toolkits available for running AI models locally. The simplest application we’ve found to get started with for the text and code generation AI use case is <a href="https://lmstudio.ai/" target="_blank" rel="noopener">LM Studio</a>. It’s built on top of <a href="https://github.com/ggml-org/llama.cpp" target="_blank" rel="noopener">llama.cpp</a>, which is an extremely powerful and extensible open source inference library that we’ll go into in a future guide. LM Studio packages it up in a user-friendly application that runs on both Windows and Linux. With this guide, we’ll focus on Windows, but the same settings will work on Linux. Note that as of December 2025, inference runs about 20% faster on Fedora 43 than on Windows 11.</p> <p>Once you download, install, and open LM Studio, you’ll be presented with a startup screen that asks you to choose your level (we recommend the default Power User) and then if you’d like to get started with your first LLM. LM Studio is good at keeping up to date with the latest models, so it’s usually reasonable to start by downloading their recommended one, and then clicking Start New Chat.</p> <p class="block-img"><img src="https://images.prismic.io/frameworkmarketplace/aTCgjHNYClf9nyMa_Screenshot2025-11-26140422.png?auto=format,compress" alt="Download screen" width="1800" height="1140" /></p> <p>If you’ve installed your Framework Desktop Driver Bundle, LM Studio should already detect your GPU AI acceleration capabilities and enable the relevant runtime for it. Before loading and running the model, you should make sure that LM Studio is fully offloading the model onto the GPU. Click on the “Select a model to load” dropdown at the top, toggle “Manually choose model load parameters”, and click on the arrow next to the model you’d like to configure.</p> <p class="block-img"><img src="https://images.prismic.io/frameworkmarketplace/aTCg3XNYClf9nyMd_Screenshot2025-11-26141403.png?auto=format,compress" alt="Select a model to load" width="1800" height="1140" /></p> <p>Slide the GPU Offload slider to the maximum number if it isn’t already, toggle the “Remember settings” selection, and click “Load Model”. You can then type into the chat box and start chatting with the LLM locally! 
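<p>If you’d rather reach the loaded model from your own scripts than from the chat window, LM Studio can also run a local server that speaks an OpenAI-compatible API (enabled from its Developer tab, listening on port 1234 by default). The Python snippet below is just a minimal sketch of that flow, assuming default settings; the base URL and the model name are placeholders, so check the Developer tab and the /v1/models endpoint in your own install for the actual values.</p> <pre><code># Minimal sketch: query a model loaded in LM Studio through its local
# OpenAI-compatible server. Assumptions: the server is enabled in LM Studio's
# Developer tab and listening on the default port 1234; the model name below
# is a placeholder -- list the real identifiers via the /v1/models endpoint.
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # assumed LM Studio default

def chat(prompt: str, model: str = "local-model") -> str:
    payload = {
        "model": model,  # placeholder; LM Studio serves whichever model is loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI-style response shape: first choice, assistant message text
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("In one sentence, what makes running an LLM locally useful?"))
</code></pre> <p>This makes it easy to point scripts and editor integrations that already speak the OpenAI API at the model running on your own machine instead of a cloud service.</p>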
Note that you can also do things like attach text or PDF files for analysis, and for “vision” models, images too.</p> <p class="block-img"><img src="https://images.prismic.io/frameworkmarketplace/aTChDHNYClf9nyMf_Screenshot2025-11-26141432.png?auto=format,compress" alt="Remember settings selection" width="1800" height="1140" /></p> <h2>Selecting AI models to run locally</h2> <p>Where running AI locally really gets interesting is in the breadth of models that are available. LM Studio has a convenient feature to search for and download models from <a href="https://huggingface.co/models" target="_blank" rel="noopener">Hugging Face</a>, which is a large community around AI models and data. If you click on the magnifying glass icon in the left sidebar, that takes you to the Discover tab. The default list is a set of recommended models from the LM Studio team that are usually excellent choices, but you can also search beyond that. Let’s pick a few example models that are optimized for different tasks.</p> <p>As an example of how to select and download a model, let’s start with Mistral Small 3.2, which is a 24B open-weights model from MistralAI. 24B indicates that it’s a model that contains 24 billion parameters. In general, the larger a model is (the more parameters), the smarter it can be. However, a larger parameter count means both that it needs more memory to load and that it will run slower, since each token of text generation needs to be processed through the entire set of parameters. Getting to 10 tokens per second (tok/s) or higher of output speed is a good target, since it means the model will be generating text at least as quickly as you can read it. One way to increase speed is through quantization, which represents a model in a smaller number of bits per parameter while slightly reducing accuracy. In general, you can run models at Q6 (6-bit quantization) without noticeable degradation. To download the Q6_K version of Mistral Small 3.2, select it from the dropdown and download it.</p> <p class="block-img"><img src="https://images.prismic.io/frameworkmarketplace/aF8VU3fc4bHWizXl_004.png?auto=format,compress" alt="MistralAI" width="1123" height="917" /></p> <p>Going back to the chat tab, you can then unload any previously loaded model and load Mistral Small 3.2, making sure to adjust the settings to use full GPU Offload. A Framework Desktop will currently run this model at around 10 tok/s (12 tok/s on Linux).</p> <p>Beyond LM Studio’s staff picks like Mistral Small 3.2, you can browse around Hugging Face for models to download and run. The community-created <a href="https://huggingface.co/spaces/OpenEvals/find-a-leaderboard?categories=text" target="_blank" rel="noopener">leaderboards</a> can be good places to find new models to use, like this <a href="https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard" target="_blank" rel="noopener">code generation</a> leaderboard or this <a href="https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard" target="_blank" rel="noopener">uncensored</a> one. When you find a model you’d like, you can download it directly from LM Studio by searching for it, selecting the right quantization, and downloading.</p> <h2>Larger model selection and performance optimization</h2> <p>What if you want to run even bigger models while still keeping speed high? One path to run faster local inference is through using Mixture of Experts (MoE) models. 
These are models which have a larger number of total parameters, but a smaller number that are active on any specific token. An excellent example of this for 128GB Framework Desktop configurations is OpenAI’s gpt-oss-120b. This is a model with 117B total parameters, but only 5.1B of them are active at a time. This model is also natively designed around 4-bit quantization.</p> <p class="block-img"><img src="https://images.prismic.io/frameworkmarketplace/aTChfXNYClf9nyMn_Screenshot2025-11-26141933.png?auto=format,compress" alt="OpenAI gpt-oss-120b" width="1800" height="1140" /></p> <p>You’ll notice when running this that it is a reasoning model, which means it has a thinking phase where it breaks down and thinks through your prompt step by step before answering. It’s especially helpful to have an MoE model for this, since the thinking phase could otherwise be slow. Framework Desktop is especially well suited for MoE models, since you can configure it with a large amount of memory, and the smaller active parameter count means it can run faster. A Framework Desktop can run this model at around 40 tok/s (48 tok/s on Linux).</p> <p class="block-img"><img src="https://images.prismic.io/frameworkmarketplace/aF8VUXfc4bHWizXj_006.png?auto=format,compress" alt="Chatting with the LLM" width="1123" height="917" /></p> <p>When loading a model, you can also toggle “Show advanced settings” to go deeper into optimization. Two settings you may find yourself adjusting are Context Length and Flash Attention. Context Length is effectively the attention span of the model, so having longer length helps a lot for both conversation/roleplay and code generation use cases. Increasing context length can substantially increase memory usage, but enabling Flash Attention helps mitigate that.</p> <p class="block-img"><img src="https://images.prismic.io/frameworkmarketplace/aTChyHNYClf9nyMt_Screenshot2025-11-26142724.png?auto=format,compress" alt="Advanced settings for optimization" width="1800" height="1140" /></p> <h2>Configuring a Framework Desktop as a local AI PC</h2> <p>With up to 128GB of memory, 256GB/s of memory bandwidth, and a big integrated GPU on Ryzen AI Max, Framework Desktop is a great fit for running AI locally. With AMD’s Variable Graphics Memory functionality, up to 112GB of this is addressable by the GPU! In AMD Adrenaline, you can adjust Dedicated Graphics Memory to up to 96GB, and up to half of the remaining System Memory will also be used.</p> <p class="block-img"><img src="https://images.prismic.io/frameworkmarketplace/aGvcKEMqNJQqHm3x_008.png?auto=format,compress" alt="AMD’s Variable Graphics Memory Performance Tuning" width="1280" height="600" /></p> <p>We have 32GB, 64GB, and 128GB configurations of Framework Desktop. All three have the same 256GB/sec of memory bandwidth, which will typically be the performance bottleneck for LLM inference. The 64GB and 128GB have slightly larger integrated GPUs (40CU instead of 32CU), which matters more for AI workloads like image generation but less for text generation. This means that overall, you should select your configuration primarily based on how large of models you want to be able to run. As noted before, you’ll need a 128GB Framework Desktop to run models like gpt-oss-120b. 
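<p>To make the sizing guidance above more concrete, here is a rough back-of-the-envelope sketch (illustrative only, not an official sizing tool). It estimates the weights-only memory footprint of a quantized model and the bandwidth-limited ceiling on generation speed; the bits-per-weight values are approximations, and real-world numbers land below the ceiling because of KV cache traffic and runtime overhead, especially for MoE models.</p> <pre><code># Rough sizing sketch (illustrative): approximate memory footprint of a
# quantized model and a bandwidth-limited upper bound on decode speed.
# Bits-per-weight values are approximations; measured tok/s will be lower
# due to KV cache reads, runtime overhead, and imperfect bandwidth use.

MEMORY_BANDWIDTH_GBPS = 256  # Framework Desktop: 256GB/s on all configurations

def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weights-only footprint in GB."""
    return params_billion * bits_per_weight / 8

def tok_per_s_ceiling(active_params_billion: float, bits_per_weight: float) -> float:
    """Bandwidth ceiling: each generated token reads every active weight once."""
    return MEMORY_BANDWIDTH_GBPS / model_size_gb(active_params_billion, bits_per_weight)

# Mistral Small 3.2: 24B dense parameters, Q6_K is roughly 6.6 bits per weight.
print(f"Mistral Small 3.2 @ Q6_K: ~{model_size_gb(24, 6.6):.0f}GB, "
      f"ceiling ~{tok_per_s_ceiling(24, 6.6):.0f} tok/s")    # measured: ~10-12 tok/s

# gpt-oss-120b: ~117B total parameters, ~5.1B active per token, ~4.25 bits/weight.
print(f"gpt-oss-120b weights: ~{model_size_gb(117, 4.25):.0f}GB, "
      f"ceiling ~{tok_per_s_ceiling(5.1, 4.25):.0f} tok/s")  # measured: ~40-48 tok/s

# Variable Graphics Memory on a 128GB config: 96GB dedicated plus half of the
# remaining 32GB of system memory, matching the 112GB figure mentioned above.
print(f"GPU-addressable memory: ~{96 + (128 - 96) / 2:.0f}GB")
</code></pre>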
There are also plenty of 20-30B parameter models that are a great fit for the 32GB and 64GB versions.</p> <p>Aside from that, when configuring your Framework Desktop DIY Edition, you’ll want to make sure you have enough storage space for all of the models you’ll be downloading, so 1TB or more is helpful. Note that there are two NVMe storage slots, so you can max out at up to 2x 8TB.</p> <p>For OS selection, both Windows and Linux work well with applications like LM Studio, but if you want to go deeper into using ROCm or PyTorch, you may find the development environment in a recent Linux distro like Fedora 43 to be smoother. As noted earlier, inference on Fedora 43 is also currently about 20% faster than on Windows, though we expect speed on both to continue to improve as AMD drives optimizations throughout the stack.</p> <p>That’s it for this first intro guide. We’ll be continuing the series with additional guides around more local AI use cases.</p> Wed, 03 Dec 2025 20:53:48 +0000