AI Picks Outfits With Abandon

Most of us choose our own outfits on a daily basis. [NeuroForge] decided that he’d instead offload this duty to artificial intelligence — perhaps more for the sake of a class project than outright fashion.

The concept involved first using an AI model to predict the weather. Those predictions would then be fed to a large language model (LLM), which would recommend an appropriate outfit for the conditions. The output from the LLM would be passed to a simple alarm clock that would wake [NeuroForge] and tell him what to wear for the day. Amazon’s Chronos forecasting model handled the weather prediction based on past weather data, while Meta’s Llama 3.1 LLM made the clothing recommendations. [NeuroForge] notes that once the historical weather data had been sourced, the whole thing could run locally without querying any external services.
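
The handoff between the two models is simple enough to sketch. The rough version below assumes the chronos-forecasting Python package and a local Llama 3.1 served through Ollama; the model sizes, prompt, and sample data are our own placeholders, not details from [NeuroForge]’s build.

```python
# Rough sketch of a local forecast-to-outfit pipeline.
# Assumes the `chronos-forecasting` and `ollama` Python packages and a local
# Llama 3.1 model pulled via Ollama -- not [NeuroForge]'s actual code.
import torch
from chronos import ChronosPipeline
import ollama

# Historical hourly temperatures (°C) sourced ahead of time, newest last.
history = torch.tensor([14.0, 13.5, 12.8, 12.1, 11.9, 12.5, 13.2, 14.8])

# Forecast the next 24 hours with Amazon's Chronos model.
chronos = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-small", device_map="cpu", torch_dtype=torch.float32
)
samples = chronos.predict(context=history, prediction_length=24)
forecast = samples.median(dim=1).values.squeeze()  # median over sample paths

# Hand the forecast to the LLM and ask for an outfit.
prompt = (
    f"Tomorrow's hourly temperatures in Celsius: {forecast.tolist()}. "
    "Suggest a weather-appropriate outfit in one short sentence."
)
reply = ollama.chat(model="llama3.1", messages=[{"role": "user", "content": prompt}])
print(reply["message"]["content"])
```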

While the AI choices often involved strange clashes and were not weather appropriate, [NeuroForge] nonetheless followed through and wore what he was told. This got tough when the outfit on a particularly cold day was a T-shirt and shorts, though the LLM did at least suggest a winter hat and gloves be part of the ensemble. Small wins, right?

We’ve seen machine learning systems applied to wardrobe-related tasks before. One wonders if a more advanced model could be trained not just to pick seasonally-appropriate clothes, but to assemble genuinely fashionable outfits to boot. If you manage to whip that up, let us know on the tips line. Bonus points if your ML system gets a gig on the reboot of America’s Next Top Model.

Continue reading “AI Picks Outfits With Abandon”

Why LLMs Are Less Intelligent Than Crows

The basic concept of human intelligence entails self-awareness alongside the ability to reason and apply logic to one’s actions and daily life. Despite the very fuzzy definition of ‘human intelligence’, and despite many aspects of said human intelligence (HI) also being observed in other animals such as crows and orcas, humans over the ages have always known that their brains are more special than those of other animals.

Currently, the Cattell-Horn-Carroll (CHC) theory of intelligence is the most widely accepted model, defining distinct types of abilities that range from memory and processing speed to reasoning. While admittedly not perfect, it gives us a baseline to work with when we think of the term ‘intelligence’, whether biological or artificial.

This raises the question of how, in the context of artificial intelligence (AI), the CHC model translates to the technologies we see in use today. When can we expect to subject an artificial intelligence to an IQ test and have it handily outperform a human on all metrics?

Continue reading “Why LLMs Are Less Intelligent Than Crows”

A Bird Watching Assistant

When AI is being touted as the latest tool to replace writers, filmmakers, and other creative talent, it can be a bit depressing staring down the barrel of a future dystopia, especially since most LLMs just parrot their training data and aren’t actually creative. But AI can have some legitimate strengths when it’s taken under wing as an assistant rather than an outright replacement.

For example, [Aarav] is happy as a lark when birdwatching, but the birds aren’t always around, and waiting hours for them to show up can feel like a bit of a wild goose chase. To help with that, he built this machine learning tool to alert him to the presence of birds.

The small device is based on a Raspberry Pi 5 with an AI HAT nested on top, and uses a wide-angle camera to keep an eagle-eyed lookout over a space like a garden or forest. It runs a few Python scripts leveraging OpenCV, a widely used computer vision library that makes it easy to get started with image recognition. When perched to view an outdoor area, it sends an email notification to the user’s phone when it detects bird activity, so they can swiftly join the action if they happen to be doing other things at the time. The system also logs hourly bird counts and creates a daily graph, helping users identify peak bird-watching times.
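
The post doesn’t include [Aarav]’s scripts, but the detect-and-notify loop is straightforward to sketch. The version below uses OpenCV for the camera feed, a stock COCO-trained YOLO model for the “bird” class, and smtplib for the alert; the detector choice, camera index, and mail settings are our own placeholders rather than details of the actual build.

```python
# Conceptual sketch of a bird detect-and-notify loop -- not [Aarav]'s actual code.
# Assumes OpenCV, the `ultralytics` package (COCO-trained YOLOv8), and an
# SMTP account; swap in whatever detector and mail settings you actually use.
import smtplib
import time
from email.message import EmailMessage

import cv2
from ultralytics import YOLO

BIRD_CLASS = 14    # "bird" in the 80-class COCO label set
COOLDOWN_S = 300   # don't spam the inbox on every frame

def send_alert(count: int) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"{count} bird(s) spotted in the garden"
    msg["From"] = "birdcam@example.com"      # placeholder address
    msg["To"] = "you@example.com"            # placeholder address
    msg.set_content("The bird camera just saw some activity.")
    with smtplib.SMTP_SSL("smtp.example.com", 465) as smtp:  # placeholder server
        smtp.login("birdcam@example.com", "app-password")
        smtp.send_message(msg)

model = YOLO("yolov8n.pt")      # small COCO-pretrained model
camera = cv2.VideoCapture(0)    # wide-angle camera on the Pi
last_alert = 0.0

while True:
    ok, frame = camera.read()
    if not ok:
        continue
    result = model(frame, verbose=False)[0]
    birds = sum(int(cls) == BIRD_CLASS for cls in result.boxes.cls)
    if birds and time.time() - last_alert > COOLDOWN_S:
        send_alert(birds)
        last_alert = time.time()
```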

Right now the system can only detect the presence of birds in general, but he hopes to build future versions that can identify birds with more specificity, perhaps down to the species. Identifying birds by sight is certainly one viable way of going about it, but one of our other favorite bird-watching tools, demonstrated by [Benn Jordan], uses similar hardware but listens for bird calls rather than looking for the birds themselves.

Continue reading “A Bird Watching Assistant”

Dual-Arm Mobile Bot Built On IKEA Cart Costs Hundreds, Not Thousands

There are many incredible open-source robotic arm projects out there, but there’s a dearth of affordable, stable, and mobile robotic platforms with arms. That’s where XLeRobot comes in. It builds on the fantastic LeRobot framework to make a unit that can be trained for autonomous tasks via machine learning, as well as operated remotely.

XLeRobot, designed by [Vector Wang], is cleverly built around easy-to-obtain parts. In addition to the mostly 3D-printed hardware, it uses an IKEA cart with stacked bin-like shelves as its main frame.

The top bin holds the dual arms and a central stalk with a “head”. There’s still room left over in the bin, too, a handy feature that gives the robot a place to stow or carry objects.

The bottom of the cart gets the three-wheeled motion unit. Its three omnidirectional wheels provide a stable base while allowing the robot to propel itself in any direction and turn on a dime. The unit bolts straight to the underside, and because the IKEA cart’s shelf bottoms are a metal mesh, no drilling is required.
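
As for how three fixed wheels add up to motion in any direction: any desired combination of forward, sideways, and rotational velocity maps linearly onto three wheel speeds. The snippet below is the textbook kiwi-drive math with made-up geometry values, offered as a rough illustration rather than code from the XLeRobot repository.

```python
# Standard three-omniwheel ("kiwi drive") kinematics -- a general sketch,
# not code from the XLeRobot project.
import math

WHEEL_ANGLES = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]  # wheels spaced 120 degrees apart
BASE_RADIUS = 0.15    # metres from base centre to each wheel (assumed value)
WHEEL_RADIUS = 0.03   # metres (assumed value)

def wheel_speeds(vx: float, vy: float, omega: float) -> list[float]:
    """Map desired body velocity (m/s, m/s, rad/s) to wheel angular speeds (rad/s)."""
    speeds = []
    for theta in WHEEL_ANGLES:
        # Each wheel drives tangentially to the base circle at its mounting angle.
        v = -math.sin(theta) * vx + math.cos(theta) * vy + BASE_RADIUS * omega
        speeds.append(v / WHEEL_RADIUS)
    return speeds

# Drive "forward" while slowly spinning in place:
print(wheel_speeds(vx=0.0, vy=0.2, omega=0.5))
```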

It’s all very tidy, and results in a mobile robotics platform that is cheap enough for most hobbyists to afford, while being big enough to navigate indoor environments and do useful tasks.

Continue reading “Dual-Arm Mobile Bot Built On IKEA Cart Costs Hundreds, Not Thousands”

Pong Cloned By Neural Network

Although not the first video game ever produced, Pong was the first to achieve commercial success and has had a tremendous influence on our culture as a whole. In its day, its popularity ushered in the arcade era that would last for more than two decades. Today it remains popular, partly for its approachability: the gameplay is simple, the original machine was built from hardwired logic rather than a microprocessor, and it offers a snapshot of the state of computer science at the time. For these reasons, [Nick Bild] has decided to recreate this arcade classic, but not in a traditional way. He’s trained a neural network to become the game instead.
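
The write-up doesn’t spell out the architecture, but the core trick of “becoming the game” can be sketched as a state-transition model: feed in the current game state and player inputs, and train the network to predict the next state from logged gameplay. Here is a minimal, purely illustrative PyTorch sketch of that idea; the state layout, network size, and training data are our own assumptions, not [Nick Bild]’s implementation.

```python
# Illustrative sketch of learning Pong's state-transition function -- our
# guess at the general structure, not [Nick Bild]'s implementation.
import torch
from torch import nn

# State: ball x/y, ball vx/vy, two paddle y positions; plus two paddle inputs.
STATE_DIM, INPUT_DIM = 6, 2

model = nn.Sequential(
    nn.Linear(STATE_DIM + INPUT_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, STATE_DIM),  # predicted next state
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# `states`, `inputs`, `next_states` would come from logged games of real Pong;
# random tensors stand in for them in this sketch.
states = torch.rand(1024, STATE_DIM)
inputs = torch.rand(1024, INPUT_DIM)
next_states = torch.rand(1024, STATE_DIM)

for epoch in range(100):
    pred = model(torch.cat([states, inputs], dim=1))
    loss = loss_fn(pred, next_states)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At play time the model replaces the game loop: its prediction for the next
# state becomes the input state for the following frame.
```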

Continue reading “Pong Cloned By Neural Network”

LeRobot Brings Autonomy To Hobby Robots

Robotic arms have a lot in common with CNC machines in that they are usually driven by a fixed script of specific positions to move to, and actions to perform. Autonomous behavior isn’t the norm, especially not for hobby-level robotics. That’s changing rapidly with LeRobot, an open-source machine learning framework from the Hugging Face community.

The SO-101 arm is an economical way to get started.

If a quick browse of the project page still leaves you with questions, you’re not alone. Thankfully, [Ilia] has a fantastic video that explains and demonstrates the fundamentals wonderfully. In it, he shows how LeRobot allows one to train an economical 3D-printed robotic arm by example, teaching it to perform a task autonomously. In this case, the task is picking up a ball and putting it into a cup.

[Ilia] first builds a dataset by manually operating the arm to pick up a ball and place it in a cup. Then, with only about fifty such examples, he trains a machine learning model capable of driving the arm through the task autonomously, regardless of where the ball and cup actually are. It even gracefully handles things like color changes and [Ilia] moving the cup and ball around mid-task. You can skip directly to 34:16 to see this autonomous behavior in action, but we do recommend watching the whole video for a highly accessible yet deeply technical overview.
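
Under the hood this is imitation learning: the recorded demonstrations become a supervised dataset pairing what the robot observed with the commands [Ilia] gave, and a policy network is trained to reproduce those commands. The snippet below is a bare-bones behavioral-cloning sketch of that idea in PyTorch; it is not the LeRobot API, and LeRobot’s actual policies (such as ACT) are considerably more sophisticated, working from camera images rather than just joint states.

```python
# Bare-bones behavioral cloning, shown only to illustrate training-by-demonstration.
# Not the LeRobot API; real policies use richer observations and architectures.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

OBS_DIM, ACTION_DIM = 12, 6   # e.g. joint angles + object pose -> joint targets

# Demonstrations recorded by teleoperating the arm (stand-in tensors here).
observations = torch.rand(5000, OBS_DIM)
actions = torch.rand(5000, ACTION_DIM)
loader = DataLoader(TensorDataset(observations, actions), batch_size=64, shuffle=True)

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(20):
    for obs, act in loader:
        loss = nn.functional.mse_loss(policy(obs), act)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# At run time, the trained policy maps each new observation to an arm command.
```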

Continue reading “LeRobot Brings Autonomy To Hobby Robots”

Reachy The Robot Gets A Mini (Kit) Version

Reachy Mini is a kit for a compact, open-source robot designed explicitly for AI experimentation and human interaction. The kit is available from Hugging Face, which is itself a repository and hosting service for machine learning models. Reachy seems to be one of their efforts at branching out from pure software.

Our guess is that some form of Stewart Platform handles the head movement.

Reachy Mini is intended as a development platform, allowing people to make and share models for different behaviors, hence the Hugging Face integration to make that easier. On the inside of the full version is a Raspberry Pi, and we suspect some form of Stewart Platform is responsible for the movement of the head. There’s also a cheaper (299 USD) “lite” version intended for tethered use, and a planned simulator to allow development and testing without access to a physical Reachy at all.
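
If that guess about the head is right, the underlying math is pleasantly simple: for a Stewart platform, each desired head pose translates directly into six actuator lengths. The snippet below is the textbook inverse-kinematics calculation with toy geometry, purely to illustrate the suspected mechanism; none of it comes from the Reachy Mini design.

```python
# Classic Stewart-platform inverse kinematics, shown only to illustrate the
# mechanism we *suspect* moves the head -- nothing here comes from Reachy Mini.
import numpy as np

def leg_lengths(base_pts, plat_pts, translation, rpy):
    """Required actuator lengths for a desired platform pose.

    base_pts, plat_pts: (6, 3) anchor coordinates in the base/platform frames.
    translation: (3,) desired platform position; rpy: roll, pitch, yaw in radians.
    """
    r, p, y = rpy
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    # Each leg runs from its base anchor to the transformed platform anchor.
    legs = translation + plat_pts @ R.T - base_pts
    return np.linalg.norm(legs, axis=1)

# Toy geometry: anchors evenly spaced on 5 cm (base) and 3 cm (platform) circles.
angles = np.deg2rad(np.arange(0, 360, 60))
base = np.column_stack([0.05 * np.cos(angles), 0.05 * np.sin(angles), np.zeros(6)])
plat = np.column_stack([0.03 * np.cos(angles), 0.03 * np.sin(angles), np.zeros(6)])
print(leg_lengths(base, plat, translation=np.array([0, 0, 0.08]), rpy=(0.1, 0.0, 0.2)))
```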

Reachy has a distinctive head and face, so if you’re thinking it looks familiar, that’s probably because we first covered Reachy the humanoid robot as a project from Pollen Robotics (Hugging Face acquired Pollen Robotics in April 2025).

The idea behind the smaller Reachy Mini seems to be to provide a platform to experiment with expressive human communication via cameras and audio, rather than to be the kind of robot that moves around and manipulates objects.

It’s still early days for the project, so if you want to know more, you can find additional information about Reachy Mini at Pollen’s site, and you can see it move in the short video embedded just below.

Continue reading “Reachy The Robot Gets A Mini (Kit) Version”