Robot Arm

Assembly Line

Design for Robotic Assembly

📅 Date:

✍️ Author: John Sprovieri

🔖 Topics: Industrial Robot, Design for X, Robot Arm

🏢 Organizations: SCHUNK, Bosch Rexroth


In reality, equating the abilities of robots and human assemblers is risky. What's easy for a human assembler can be difficult or impossible for a robot, and vice versa. To ensure success with robotic assembly, engineers must adapt their parts, products and processes to the unique requirements of the robot.

Reorienting an assembly adds cycle time without adding value. It also increases the cost of the fixtures. And, instead of a SCARA or Cartesian robot, assemblers may need a more expensive six-axis robot.

Robotic grippers are not as nimble as human hands, and some parts are easier for robots to grip than others. A part with two parallel surfaces can be handled by a two-fingered gripper. A circular part can be handled by its outside edges, or, if it has a hole in the middle, its inside edges. Adding a small lip to a part can help a gripper reliably manipulate the part and increase the efficiency of the system. If the robot will handle more than one type of part, the parts should be designed so they can all be manipulated with the same gripper. A servo-driven gripper could also help in that situation, since engineers can program stroke length and gripping force.
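
That last point lends itself to a concrete sketch. Below is a minimal Python example of per-part "grip recipes" for a servo-driven gripper; the `gripper` interface, part names, and all values are hypothetical illustrations, not any vendor's actual API.

```python
# Hypothetical sketch: one servo-driven gripper covering several part types
# by switching its programmed stroke and force. The gripper interface and
# all numbers are illustrative, not a real vendor API.
from dataclasses import dataclass

@dataclass
class GripRecipe:
    stroke_mm: float    # finger opening to pre-position before the approach
    force_n: float      # gripping force applied once the fingers contact
    grip_inside: bool   # True = grip a hole by its inside edges

# One recipe per part, so a single gripper serves the whole product family.
RECIPES = {
    "bracket": GripRecipe(stroke_mm=24.0, force_n=40.0, grip_inside=False),
    "ring":    GripRecipe(stroke_mm=8.0,  force_n=15.0, grip_inside=True),
    "housing": GripRecipe(stroke_mm=60.0, force_n=80.0, grip_inside=False),
}

def pick(part: str, gripper) -> None:
    """Pre-position the fingers, then actuate toward the part to grip it."""
    recipe = RECIPES[part]
    gripper.set_stroke(recipe.stroke_mm)
    gripper.set_force(recipe.force_n)
    if recipe.grip_inside:
        gripper.close()  # enter the hole with fingers closed...
        gripper.open()   # ...then press outward against the inside edges
    else:
        gripper.open()   # straddle the part with fingers open...
        gripper.close()  # ...then press inward on the parallel surfaces
```

Designing the parts so one recipe table covers them all avoids tool changes between picks, which is exactly the point made above.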

Read more at Assembly Magazine

🧠🦾 RT-2: New model translates vision and language into action

📅 Date:

🔖 Topics: Robot Arm, Transformer Net, Machine Vision, Vision-language-action Model

🏢 Organizations: Google


High-capacity vision-language models (VLMs) are trained on web-scale datasets, making these systems remarkably good at recognising visual or language patterns and operating across different languages. But for robots to achieve a similar level of competency, they would need to collect robot data, first-hand, across every object, environment, task, and situation.

In our paper, we introduce Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalised instructions for robotic control, while retaining web-scale capabilities.
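
A defining idea in VLA models like RT-2 is that actions are emitted as ordinary tokens, the same way the model emits text. The sketch below shows that quantize-and-detokenize interface; the bin count and normalized action range are assumptions for illustration, not RT-2's exact configuration.

```python
# Sketch of a VLA action interface: continuous robot commands are quantized
# into discrete bins so a language model can emit them as ordinary tokens.
# Bin count and action range are illustrative assumptions.
import numpy as np

N_BINS = 256           # assumed vocabulary of action bins per dimension
LOW, HIGH = -1.0, 1.0  # assumed normalized range of each action dimension

def action_to_tokens(action: np.ndarray) -> list[int]:
    """Quantize each action dimension to an integer bin index."""
    clipped = np.clip(action, LOW, HIGH)
    bins = (clipped - LOW) / (HIGH - LOW) * (N_BINS - 1)
    return np.round(bins).astype(int).tolist()

def tokens_to_action(tokens: list[int]) -> np.ndarray:
    """Invert the quantization back into continuous robot commands."""
    return LOW + np.asarray(tokens, dtype=float) / (N_BINS - 1) * (HIGH - LOW)

# Example: a 7-dimensional arm command (xyz translation, rotation, gripper)
print(action_to_tokens(np.array([0.1, -0.2, 0.0, 0.5, 0.0, 0.0, 1.0])))
```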

Read more at DeepMind Blog

This 3D Printed Gripper Doesn't Need Electronics To Function

Rocsys wants to automate EV charging, starting in ports and yards

📅 Date:

✍️ Author: Rebecca Bellan

🔖 Topics: Robot Arm, Funding Event

🏢 Organizations: Rocsys, Hyster, Taylor Machine Works


Rocsys has created a robotic arm that can transform any electric vehicle charger into an autonomous charger. In yards and ports, where vehicle uptime is crucial and the margin for error is slim, plugging in and removing a charger without manual intervention is not only attractive to logistics operators but also has a use case today.

Aside from partnerships with companies like electric forklift company Hyster, industrial equipment supplier Taylor Machine Works and port operator SSA Marine, Rocsys claims to have a commercial partnership in the works with one of the largest Big Box retailers in North America.

Rocsys doesn't intend to stop with heavy-duty industrial logistics. The startup closed a $36 million Series A made up of half equity and half debt. The funds will help the startup build out its North American division and support R&D into the automotive sector, which would include both mainstream consumer vehicles and self-driving robotaxi fleets.

Read more at TechCrunch

🧠🦾 RoboCat: A self-improving robotic agent

📅 Date:

🔖 Topics: Robot Arm, Transformer Net

🏢 Organizations: Google


RoboCat learns much faster than other state-of-the-art models. It can pick up a new task with as few as 100 demonstrations because it draws from a large and diverse dataset. This capability will help accelerate robotics research, as it reduces the need for human-supervised training, and is an important step towards creating a general-purpose robot.

RoboCat is based on our multimodal model Gato (Spanish for "cat"), which can process language, images, and actions in both simulated and physical environments. We combined Gato's architecture with a large training dataset of sequences of images and actions of various robot arms solving hundreds of different tasks.

The combination of all this training means the latest RoboCat is based on a dataset of millions of trajectories, from both real and simulated robotic arms, including self-generated data. We used four different types of robots and many robotic arms to collect vision-based data representing the tasks RoboCat would be trained to perform.
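
The "self-improving" part of RoboCat follows a fine-tune, practice, retrain cycle. The pseudocode below sketches that loop as described above; every object and method name is a placeholder, not DeepMind's code.

```python
# Hedged pseudocode for the self-improvement cycle described above.
# All objects and methods are placeholders, not DeepMind's implementation.
def self_improvement_cycle(generalist, new_task, demonstrations, dataset):
    # 1. Fine-tune the generalist agent on a small number (as few as ~100)
    #    of human-controlled demonstrations of the new task.
    specialist = generalist.fine_tune(demonstrations)

    # 2. Deploy the specialist to practice the task on its own, producing
    #    a large batch of self-generated trajectories.
    self_generated = [specialist.attempt(new_task) for _ in range(10_000)]

    # 3. Fold the demonstrations and self-generated data back into the
    #    growing dataset of real and simulated trajectories.
    dataset.extend(demonstrations + self_generated)

    # 4. Retrain the generalist on everything; the new version handles the
    #    new task while retaining (and often improving) earlier skills.
    return generalist.train(dataset)
```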

Read more at DeepMind Blog

We 3D Printed End-of-Arm Tools with Rapid Robotics

SRI Robotics: BACH–Belt-Augmented Compliant Hand

How a robotic arm could help the US Army lift artillery shells

📅 Date:

✍️ Author: Kelsey Atherton

🔖 Topics: Robot Arm

🏭 Vertical: Defense

🏢 Organizations: US Army, Sarcos Robotics


To fire artillery faster, the US Army is turning to robotic arms. On December 1, Army Futures Command awarded a $1 million contract to Sarcos Technology and Robotics Corporation to test a robot system that can handle and move artillery rounds.

An automated system, using robot arms to fetch and ready artillery rounds, would function somewhat like a killer version of a vending machine arm. The human gunner could select the type of ammunition from internal stores, and the robotic loader would then find it, grab it, and place it on a lift. Should the robot arm perform as expected in testing, it will eliminate a job that is all repetitive strain, automating the dull and menial work of readying rounds to fire.
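
Read as a control problem, that loading sequence is a short pipeline: select, locate, grasp, place. A hedged Python sketch of such a cycle follows; every object and method here is hypothetical, not Sarcos's software.

```python
# Hypothetical sketch of the select-locate-grasp-place loading cycle.
# The stores, arm, and lift interfaces are illustrative placeholders.
def load_round(round_type: str, stores, arm, lift) -> None:
    slot = stores.locate(round_type)  # gunner's selection drives the search
    arm.move_to(slot)
    arm.grasp()                       # pick the round from internal stores
    arm.move_to(lift.position)
    arm.release()                     # place the round on the lift
    lift.raise_to_breech()            # ready the round for the gun crew
```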

Read more at Popular Science

How a universal model is helping one generation of Amazon robots train the next

📅 Date:

✍️ Author: Sean O'Neill

🔖 Topics: Robot Arm, Machine Learning, Warehouse Automation

🏢 Organizations: Amazon


In short, building a dataset big enough to train a demanding machine learning model requires time and resources, with no guarantee that the novel robotic process you are working toward will prove successful. This became a recurring issue for Amazon Robotics AI. So this year, work began in earnest to address the data scarcity problem. The solution: a "universal model" able to generalize to virtually any package segmentation task.

To develop the model, Meeker and her colleagues first used publicly available datasets to give their model basic classification skills, such as being able to distinguish boxes or packages from other objects. Next, they honed the model, teaching it to distinguish between many types of packaging in warehouse settings, from plastic bags to padded mailers to cardboard boxes of varying appearance, using a trove of training data compiled by the Robin program and half a dozen other Amazon teams over the last few years. This dataset comprised almost half a million annotated images.
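
That two-stage recipe, pretraining on public data and then fine-tuning on domain images, is the standard transfer-learning pattern for segmentation. Below is a minimal sketch using torchvision's Mask R-CNN; the choice of model and the class list are assumptions for illustration, not Amazon's actual architecture.

```python
# Sketch of the two-stage recipe: start from a publicly pretrained
# segmentation model, then swap the heads and fine-tune on warehouse data.
# Model choice and class list are illustrative, not Amazon's actual stack.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Stage 1 analogue: COCO-pretrained weights stand in for the basic
# "distinguish packages from other things" skills from public datasets.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Stage 2 analogue: replace the prediction heads with warehouse-specific
# classes, then fine-tune on the annotated package images.
num_classes = 5  # e.g. background, box, polybag, padded mailer, unpackaged
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
# ...training loop over the annotated warehouse images goes here
```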

The universal model now includes images of unpackaged items, too, allowing it to perform segmentation across a greater diversity of warehouse processes. Initiatives such as multimodal identification, which aims to visually identify items without needing to see a barcode, and the automated damage detection program are accruing product-specific data that could be fed into the universal model, as could images taken on the fulfillment center floor by the autonomous robots that carry crates of products.

Read more at Amazon Science

How Soft Robotics Enables Peeps Packaging