Generative AI

Assembly Line

Eureka! NVIDIA Research Breakthrough Puts New Spin on Robot Learning

📅 Date:

✍️ Author: Angie Lee

🔖 Topics: Generative AI, Large Language Model, Industrial Robot, Reinforcement Learning

🏢 Organizations: NVIDIA


A new AI agent developed by NVIDIA Research that can teach robots complex skills has trained a robotic hand to perform rapid pen-spinning tricks — for the first time as well as a human can. The Eureka research, published today, includes a paper and the project’s AI algorithms, which developers can experiment with using NVIDIA Isaac Gym, a physics simulation reference application for reinforcement learning research. Isaac Gym is built on NVIDIA Omniverse, a development platform for building 3D tools and applications based on the OpenUSD framework. Eureka itself is powered by the GPT-4 large language model.

Read more at NVIDIA Blog

To excel at engineering design, generative AI must learn to innovate, study finds

📅 Date:

✍️ Author: Jennifer Chu

🔖 Topics: Generative AI, Generative Design

🏢 Organizations: MIT


“Deep generative models (DGMs) are very promising, but also inherently flawed,” says study author Lyle Regenwetter, a mechanical engineering graduate student at MIT. “The objective of these models is to mimic a dataset. But as engineers and designers, we often don’t want to create a design that’s already out there.” He and his colleagues make the case that if mechanical engineers want help from AI to generate novel ideas and designs, they will have to first refocus those models beyond “statistical similarity.”

For instance, if DGMs can be built with other priorities, such as performance, design constraints, and novelty, study co-author Faez Ahmed foresees that “numerous engineering fields, such as molecular design and civil infrastructure, would greatly benefit. By shedding light on the potential pitfalls of relying solely on statistical similarity, we hope to inspire new pathways and strategies in generative AI applications outside multimedia.”

Read more at MIT News

Making Conversation: Using AI to Extract Intel from Industrial Machinery and Equipment

📅 Date:

✍️ Author: Rehana Begg

🔖 Topics: Large Language Model, Generative AI

🏭 Vertical: Automotive

🏢 Organizations: iNAGO


What if your machine could talk? This is the question Ron Di Carlantonio has grappled with since he founded iNAGO in 1998. iNAGO was on board when the Government of Canada supported a lighthouse project led by the Automotive Parts Manufacturers’ Association (APMA) to design, engineer and build a connected and autonomous zero-emissions vehicle (ZEV) concept car and its digital twin to validate and integrate autonomous technologies. The electric SUV is equipped with a dual-motor powertrain with a total output of 550 hp and 472 lb-ft of torque.

The general use of AI-based solutions in the automotive industry stretches across the lifecycle of a vehicle, from design and manufacturing to sales and aftermarket care. AI-powered chatbots, in particular, deliver instant, personalized virtual driver assistance, are on call 24/7, and can evolve with the preferences of tech-savvy drivers. Di Carlantonio now sees an opportunity to extend the use of the intelligent assistant platform to the smart factory by making industrial equipment—CNC machines, presses, conveyors, industrial robots—talk.

Read more at Machine Design

Toyota Research Institute Unveils Breakthrough in Teaching Robots New Behaviors

📅 Date:

🔖 Topics: Industrial Robot, Generative AI, Diffusion Policy, Large Behavior Model

🏢 Organizations: Toyota


The Toyota Research Institute (TRI) announced a breakthrough generative AI approach based on Diffusion Policy to quickly and confidently teach robots new, dexterous skills. This advancement significantly improves robot utility and is a step towards building “Large Behavior Models (LBMs)” for robots, analogous to the Large Language Models (LLMs) that have recently revolutionized conversational AI.

TRI has already taught robots more than 60 difficult, dexterous skills using the new approach, including pouring liquids, using tools, and manipulating deformable objects. These achievements were realized without writing a single line of new code; the only change was supplying the robot with new data. Building on this success, TRI has set an ambitious target of teaching hundreds of new skills by the end of the year and 1,000 by the end of 2024.

Read more at Toyota Press

Solution Accelerator: LLMs for Manufacturing

📅 Date:

✍️ Authors: Will Block, Ramdas Murali, Nicole Lu, Bala Amavasai

🔖 Topics: Generative AI, Large Language Model

🏢 Organizations: Databricks


In this solution accelerator, we focus on item (3) above, which is the use case of augmenting field service engineers with a knowledge base in the form of an interactive, context-aware Q/A session. The challenge manufacturers face is how to build and incorporate data from proprietary documents into LLMs. Training LLMs from scratch is a very costly exercise, running to hundreds of thousands if not millions of dollars.

Instead, enterprises can tap into pre-trained foundation LLMs (like MPT-7B and MPT-30B from MosaicML) and augment and fine-tune these models with their proprietary data. This brings the cost down to tens, if not hundreds, of dollars, effectively a 10,000x cost saving.
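
As a rough sketch of the retrieval-augmented Q/A pattern this accelerator targets (the chunk texts, function names, and crude lexical matching below are illustrative placeholders, not Databricks or MosaicML APIs), proprietary manual snippets are matched to a technician’s question and packed into the prompt of a pre-trained foundation model:

```python
from difflib import SequenceMatcher

# Hypothetical proprietary knowledge base: chunks of field service manuals.
chunks = [
    "Error E42 on the stamping press indicates low hydraulic pressure; check the pump seals.",
    "Conveyor belt tracking is adjusted with the tail pulley set screws.",
    "Robot controller fault 1003 requires a cold restart after clearing the cell.",
]

def retrieve(question, k=2):
    """Crude lexical retrieval as a stand-in for vector search over embeddings."""
    score = lambda c: SequenceMatcher(None, question.lower(), c.lower()).ratio()
    return sorted(chunks, key=score, reverse=True)[:k]

def build_prompt(question):
    """Pack the most relevant chunks into the prompt sent to the foundation LLM."""
    context = "\n".join(retrieve(question))
    return ("Use only the context below to answer the technician's question.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

# build_prompt("What does error E42 on the press mean?") would then be passed to a
# pre-trained foundation LLM such as MPT-7B, optionally fine-tuned on the same corpus.
```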

Read more at Databricks Blog

The treacherous path to trustworthy Generative AI for Industry

📅 Date:

✍️ Author: Geir Engdahl

🔖 Topics: Generative AI, Large Language Model

🏢 Organizations: Cognite


Despite the awesome first impression ChatGPT made and the significant efficiency gains programming copilots already deliver to developers, making LLMs serve non-developers – the vast majority of the workforce – by translating natural language prompts into API or database queries that return readily usable analytics outputs is not so straightforward. Three primary challenges are:

  • Inconsistency of prompts to completions (no deterministic reproducibility between LLM inputs and outputs)
  • Nearly impossible to audit or explain LLM answers (once trained, LLMs are black boxes)
  • Coverage gap on niche domain areas that typically matter most to enterprise users (LLMs are trained on large corpora of internet data, heavily biased towards more generalist topics)

Read more at Cognite Blog

Frontline Copilot | The greatest advancement of the year?? | Digital Factory 2023

Chevron Phillips Chemical tackles generative AI with Databricks

Lumafield Introduces Atlas, an AI Co-Pilot for Engineers

📅 Date:

🔖 Topics: Generative AI, Large Language Model

🏢 Organizations: Lumafield


Lumafield today unveiled Atlas, a groundbreaking AI co-pilot that helps engineers work faster by answering questions and solving complex engineering and manufacturing challenges using plain language. Atlas is a new tool in Voyager, Lumafield’s cloud-based software for analyzing 3D scan and industrial CT scan data. Along with Atlas, Lumafield announced a major expansion of Voyager’s capabilities, including the ability to upload, analyze, and share data from any 3D scanner.

Read more at Lumafield Articles

🧠 AI PCB Design: How Generative AI Takes Us From Constraints To Possibilities

📅 Date:

✍️ Author: Michael Jackson

🔖 Topics: Generative AI, Generative Design

🏭 Vertical: Computer and Electronic

🏢 Organizations: Cadence


Cadence customers are already reaping the benefits of generative AI within our Joint Enterprise Data and AI (JedAI) Platform. Chip designers are using Cadence Cerebrus AI to design chips that are faster, cheaper, and more energy efficient. Now, we’re bringing this generative AI approach to an area of EDA that has traditionally been highly manual—PCB placement and routing.

Allegro X AI flips the PCB design process on its head. Rather than present the operator with a blank canvas, it takes a list of components and constraints that must be satisfied in the end result and sifts through a plethora of design possibilities, encompassing varied placement and routing options. This is hugely powerful for hardware engineers focused on design space exploration (DSE). DSE has long been par for the course in IC design, yet it has more recently become critical for PCBs because today’s IC complexity doesn’t shrink when it moves onto the board—it increases.

However, it’s important to understand that this isn’t Cadence replacing traditional compute algorithms and automation approaches with AI. We remain as committed to accuracy and “correct by construction” as we’ve ever been, and while Allegro X AI is trained on extensive real-world datasets of successful and failed designs, we don’t use that data to determine correctness.

Read more at Semiconductor Engineering

🧠 Toyota and Generative AI: It’s Here, and This is How We’re Using It

📅 Date:

🔖 Topics: Generative AI, Predictive Maintenance

🏢 Organizations: Toyota


Toyota’s initial goal in 2016 was to engineer a resilient cloud safety system, and that led to the development of Safety Connect, a service powered by Drivelink from software company Toyota Connected North America (TCNA). The Safety Connect service is designed to leverage key data points from the vehicle to identify when a collision has occurred and send an automatic notification to call center agents. Should the driver become unconscious, telematics information can provide a more complete picture of the situation, enabling agents to contact authorities faster when it’s needed most.

Vehicle maintenance has also been a focus of AI-driven enhancements. Connected vehicles have hundreds of sensors, and we have been using data from these vehicles to build machine learning models for the most common maintenance items, including batteries, brakes, tires, and oil, and are currently investigating dozens of other components, using daily streaming data from millions of connected and consented vehicles. This suite of predictive maintenance models will help make customers aware of potential maintenance needs prior to component failures, so they can enjoy more reliable mobility experiences.

Read more at Toyota Pressroom

AI and AM: A Powerful Synergy

📅 Date:

✍️ Author: Robin Tuluie

🔖 Topics: Additive Manufacturing, Computer-aided Engineering, Generative AI


There’s an urgent opportunity, right now, to fully exploit the tools of computer-aided engineering (CFD, FEA, electromagnetic simulation and more) using the capabilities of AI. Yes, we’re talking about design optimization—but it’s optimization like never before, automated with machine learning, at a speed and level of precision far beyond what can be accomplished by most manufacturers today.

AI accomplishes this feat by solving the CFD or FEA equations in a non-traditional way: machine learning examines, and then emulates, the overall physical behavior of a design, not every single math problem that underlies that behavior.
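
A minimal sketch of that emulation idea, assuming a generic Gaussian-process surrogate and made-up design data rather than any particular vendor’s tooling: a handful of expensive solver runs become training points, and the learned model then predicts performance for new design candidates almost instantly.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical training data: design parameters (e.g., wall thickness, fillet radius)
# paired with the quantity a full FEA run would compute (e.g., peak stress in MPa).
X_sim = np.array([[1.0, 2.0], [1.5, 2.5], [2.0, 1.0], [2.5, 3.0], [3.0, 1.5]])
y_sim = np.array([410.0, 365.0, 500.0, 320.0, 455.0])

# Fit a surrogate that emulates the solver's input-output behavior.
surrogate = GaussianProcessRegressor(normalize_y=True).fit(X_sim, y_sim)

# During optimization, the surrogate stands in for the expensive simulation:
candidates = np.random.uniform([1.0, 1.0], [3.0, 3.0], size=(1000, 2))
pred, std = surrogate.predict(candidates, return_std=True)
best = candidates[np.argmin(pred)]   # design predicted to minimize peak stress
print(best, pred.min())
```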

Read more at Design and Development Today

🧑‍🏭🧠 Hitachi to use generative AI to pass expert skills to next generation

📅 Date:

✍️ Authors: Yoichiro Hiroi, Tsuyoshi Tamesue

🔖 Topics: Generative AI, Worker Training

🏢 Organizations: Hitachi


Japan’s Hitachi will utilize generative artificial intelligence to pass on expert skills in maintenance and manufacturing to newer workers, aiming to blunt the impact of mass retirements of experienced employees. The company will use the technology to generate videos depicting difficulties or accidents at railways, power stations and manufacturing plants and use them in virtual training for employees.

Hitachi already has developed an AI system that creates images based on 3D data of plants and infrastructure. It projects possible malfunctions – smoke, a cave-in, a rail buckling – onto an image of an actual rail track. This can also be done on images of manufacturing sites, including metal processing and assembly lines. Hitachi will merge this technology into a program for virtual drills that is now under development.

Read more at Nikkei Asia

⛓️🧠 Multinationals turn to generative AI to manage supply chains

📅 Date:

✍️ Author: Oliver Telling

🔖 Topics: Generative AI, Supply Chain Control Tower

🏢 Organizations: Unilever, Siemens, Maersk, Pactum, Walmart, Scoutbee, Altana


Navneet Kapoor, chief technology officer at Maersk, said “things have changed dramatically over the past year with the advent of generative AI”, which can be used to build chatbots and other software that generates responses to human prompts.

New supply chain laws in countries such as Germany, which require companies to monitor environmental and human rights issues in their supply chains, have driven interest and investment in the area.

Read more at Financial Times

U. S. Steel Aims to Improve Operational Efficiencies and Employee Experiences with Google Cloud’s Generative AI

📅 Date:

🔖 Topics: Generative AI

🏢 Organizations: US Steel, Google


United States Steel Corporation (NYSE: X) (“U. S. Steel”) and Google Cloud today announced a new collaboration to build applications using Google Cloud’s generative artificial intelligence (“gen AI”) technology to drive efficiencies and improve employee experiences in the largest iron ore mine in North America. As a leading manufacturer engaging in gen AI with Google Cloud, U. S. Steel continues to advance its more than 100-year legacy of innovation.

The first gen AI-driven application that U. S. Steel will launch is called MineMind™, which aims to simplify equipment maintenance by providing optimal solutions for mechanical problems, saving time and money, and ultimately improving productivity. Underpinned by Google Cloud AI technology such as Document AI and Vertex AI, MineMind™ is expected not only to improve the maintenance team’s experience by putting the information they need at their fingertips, but also to save costs through more efficient use of technicians’ time and better-maintained trucks. The initial phase of the launch will begin in September and will impact more than 60 haul trucks at U. S. Steel’s Minnesota Ore Operations facilities, Minntac and Keetac.

Read more at Business Wire

Ansys Accelerates Innovation by Expanding AI Offerings with New Virtual Assistant

📅 Date:

🔖 Topics: Generative AI

🏢 Organizations: Ansys, Microsoft


Expanding artificial intelligence (AI) integration across its simulation portfolio and customer community, Ansys (NASDAQ: ANSS) announced the limited beta release of AnsysGPT, a multilingual, conversational AI virtual assistant set to revolutionize the way Ansys customers receive support. Developed using state-of-the-art ChatGPT technology available via the Microsoft Azure OpenAI Service, AnsysGPT uses well-sourced Ansys public data to answer technical questions concerning Ansys products, relevant physics, and engineering topics within one comprehensive tool.

Expected in early 2024, AnsysGPT will optimize technical support for customers — delivering information and solutions more efficiently, furthering the democratization of simulation. While currently in beta testing with select customers and channel partners, upon its full release next year AnsysGPT will provide easily accessible 24/7 technical support through the Ansys website. Unlike general AI virtual assistants that use unsupported information, AnsysGPT is trained using Ansys data to generate tailored, applicable responses drawn from reliable Ansys resources including, but not limited to, Ansys Innovation Courses, technical documentation, blog articles, and how-to videos. Strong controls were put in place to ensure that no proprietary information of any kind was used during the training process, and that customer inputs are not stored or used to train the system in any way.

Read more at Ansys News

Sight Machine Factory CoPilot Democratizes Industrial Data With Generative AI

📅 Date:

🔖 Topics: Generative AI

🏢 Organizations: Sight Machine, Microsoft


Sight Machine Inc. today announced the release of Factory CoPilot, democratizing industrial data through the power of generative artificial intelligence. By integrating Sight Machine’s Manufacturing Data Platform with Microsoft Azure OpenAI Service, Factory CoPilot brings unprecedented ease of access to manufacturing problem solving, analysis and reporting.

Using a natural language user interface similar to ChatGPT, Factory CoPilot offers an intuitive, “ask the expert” experience for all manufacturing stakeholders, regardless of data proficiency. In response to a single question, Factory CoPilot can automatically summarize all relevant data and information about production in real time (e.g., for daily meetings) and generate user-friendly reports, emails, charts and other content (in any language) about the performance of any machine, line or plant across the manufacturing enterprise, based on contextualized data in the Sight Machine platform.

Read more at Sight Machine Press

Utility AI Beta

Retentive Network: A Successor to Transformer for Large Language Models

📅 Date:

✍️ Authors: Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei

🔖 Topics: Retentive Network, Transformer, Large Language Model, Generative AI


In this work, we propose Retentive Network (RetNet) as a foundation architecture for large language models, simultaneously achieving training parallelism, low-cost inference, and good performance. We theoretically derive the connection between recurrence and attention. Then we propose the retention mechanism for sequence modeling, which supports three computation paradigms, i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel representation allows for training parallelism. The recurrent representation enables low-cost O(1) inference, which improves decoding throughput, latency, and GPU memory without sacrificing performance. The chunkwise recurrent representation facilitates efficient long-sequence modeling with linear complexity, where each chunk is encoded in parallel while the chunks are summarized recurrently. Experimental results on language modeling show that RetNet achieves favorable scaling results, parallel training, low-cost deployment, and efficient inference. The intriguing properties make RetNet a strong successor to Transformer for large language models.
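
To make the paradigms concrete, here is a stripped-down single-head sketch of retention (omitting the paper’s rotation, gating, group normalization, and multi-scale decay), showing that the parallel and recurrent forms produce the same outputs while the recurrent form carries only a fixed-size state:

```python
import numpy as np

def parallel_retention(Q, K, V, gamma=0.9):
    """Parallel form: a decay-masked, attention-like product over the full sequence."""
    n = np.arange(Q.shape[0])
    D = np.where(n[:, None] >= n[None, :], gamma ** (n[:, None] - n[None, :]), 0.0)
    return (Q @ K.T * D) @ V

def recurrent_retention(Q, K, V, gamma=0.9):
    """Recurrent form: a fixed-size state updated per step, enabling O(1) inference."""
    S = np.zeros((K.shape[1], V.shape[1]))
    out = np.zeros((Q.shape[0], V.shape[1]))
    for t in range(Q.shape[0]):
        S = gamma * S + np.outer(K[t], V[t])   # decay old state, add k_t v_t^T
        out[t] = Q[t] @ S
    return out

Q, K, V = (np.random.randn(8, 4) for _ in range(3))
assert np.allclose(parallel_retention(Q, K, V), recurrent_retention(Q, K, V))
```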

Read more at arXiv

LongNet: Scaling Transformers to 1,000,000,000 Tokens

📅 Date:

✍️ Authors: Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei

🔖 Topics: Transformer, Large Language Model, Generative AI


Scaling sequence length has become a critical demand in the era of large language models. However, existing methods struggle with either computational complexity or model expressivity, rendering the maximum sequence length restricted. To address this issue, we introduce LongNet, a Transformer variant that can scale sequence length to more than 1 billion tokens, without sacrificing the performance on shorter sequences. Specifically, we propose dilated attention, which expands the attentive field exponentially as the distance grows. LongNet has significant advantages: 1) it has a linear computation complexity and a logarithm dependency between any two tokens in a sequence; 2) it can be served as a distributed trainer for extremely long sequences; 3) its dilated attention is a drop-in replacement for standard attention, which can be seamlessly integrated with the existing Transformer-based optimization. Experimental results demonstrate that LongNet yields strong performance on both long-sequence modeling and general language tasks. Our work opens up new possibilities for modeling very long sequences, e.g., treating a whole corpus or even the entire Internet as a sequence.
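
A toy sketch of the sparsification idea behind dilated attention (an illustrative simplification, not the paper’s full segment-and-dilation mixing across heads): the sequence is split into segments, and within each segment only every r-th token participates in a given attention pattern, so distant tokens are reached through exponentially coarser, cheaper patterns.

```python
import numpy as np

def dilated_indices(seq_len, segment_len, dilation):
    """Token indices taking part in one dilated-attention pattern: split the
    sequence into segments of `segment_len` and keep every `dilation`-th token."""
    keep = []
    for start in range(0, seq_len, segment_len):
        keep.extend(range(start, min(start + segment_len, seq_len), dilation))
    return np.array(keep)

# Mixing patterns with growing segment length and dilation covers the whole
# sequence while each pattern attends over far fewer tokens than full attention.
for w, r in [(8, 1), (16, 2), (32, 4)]:
    print(f"segment={w} dilation={r}:", dilated_indices(64, w, r)[:12], "...")
```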

Read more at arXiv

🧠 Toyota Research Institute Unveils New Generative AI Technique for Vehicle Design

📅 Date:

🔖 Topics: Generative AI

🏭 Vertical: Automotive

🏢 Organizations: Toyota


Toyota Research Institute (TRI) today unveiled a generative artificial intelligence (AI) technique to amplify vehicle designers. Currently, designers can leverage publicly available text-to-image generative AI tools as an early step in their creative process. With TRI’s new technique, designers can add initial design sketches and engineering constraints into this process, cutting down the iterations needed to reconcile design and engineering considerations.

TRI researchers released two papers describing how the technique incorporates precise engineering constraints into the design process. Constraints like drag (which affects fuel efficiency) and chassis dimensions like ride height and cabin dimensions (which affect handling, ergonomics, and safety) can now be implicitly incorporated into the generative AI process. The team tied principles from optimization theory, used extensively for computer-aided engineering, to text-to-image-based generative AI. The resulting algorithm allows the designer to optimize engineering constraints while maintaining their text-based stylistic prompts to the generative AI process.

Read more at Toyota Newsroom

3DGPT - your 3D printing friend & collaborator!

Demo: Cognite Data Fusion's Generative AI Copilot

🧠 What is Visual Prompting?

📅 Date:

✍️ Author: Mark Sabini

🔖 Topics: Generative AI

🏢 Organizations: Landing AI


Landing AI’s Visual Prompting capability is an innovative approach that takes text prompting, used in applications such as ChatGPT, to computer vision. The impressive part? With only a few clicks, you can transform an unlabeled dataset into a deployed model in mere minutes. This results in a significantly simplified, faster, and more user-friendly workflow for applying computer vision.

In a quest to make Visual Prompting more practical for customers, we studied 40 projects across the manufacturing, agriculture, medical, pharmaceutical, life sciences, and satellite imagery verticals. Our analysis revealed that Visual Prompting alone could solve just 10% of the cases, but the addition of simple post-processing logic increases this to 68%.
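
For a sense of what “simple post-processing logic” can look like in practice, here is a hypothetical example (not Landing AI’s implementation): filtering a predicted binary defect mask by connected-component size before counting defects.

```python
import numpy as np
from scipy import ndimage

def postprocess(mask, min_area=50):
    """Drop connected components smaller than `min_area` pixels, then count defects."""
    labeled, _ = ndimage.label(mask)
    areas = np.bincount(labeled.ravel())                 # areas[0] is the background
    keep_labels = np.where(areas[1:] >= min_area)[0] + 1
    return np.isin(labeled, keep_labels), len(keep_labels)

mask = np.zeros((128, 128), dtype=bool)
mask[10:30, 10:30] = True        # a real defect (400 px), kept
mask[100:103, 100:103] = True    # speckle noise (9 px), removed by the filter
cleaned, n_defects = postprocess(mask)
print(n_defects)                 # -> 1
```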

Read more at Landing AI Blog

Retrocausal Revolutionizes Manufacturing Process Management with Industry-First Generative AI LeanGPT™ offering

📅 Date:

🔖 Topics: Generative AI, ChatGPT

🏢 Organizations: Retrocausal


Retrocausal, a leading manufacturing process management platform provider, today announced the release of LeanGPT™, its proprietary foundation models specialized for the manufacturing domain. The company also launched Kaizen Copilot™, Retrocausal’s first LeanGPT application that assists industrial engineers in designing and continuously improving manufacturing assembly processes and integrates Lean Six Sigma and Toyota Production System (TPS) principles favored by Industrial Engineers (IEs). The industry-first solution gathers intelligence from Retrocausal’s computer vision and IoT-based floor analytics platform Pathfinder. In addition, it can be connected to an organization’s knowledge bases, including Continuous Improvement (CI) systems, Quality Management Systems (QMS), and Manufacturing Execution Systems (MES), in a secure manner.

Read more at Globe Newswire

What does it take to talk to your Industrial Data in the same way we talk to ChatGPT?

📅 Date:

✍️ Author: Jason Schern

🔖 Topics: Generative AI, Large Language Model

🏢 Organizations: Cognite


The vast data set used to train LLMs is curated in various ways to provide clean, contextualized data. Contextualized data includes explicit semantic relationships within the data that can greatly affect the quality of the model’s output. Contextualizing the data we provide as input to an LLM ensures that the text consumed is relevant to the task at hand. For example, when prompting an LLM to provide information about operating industrial assets, the data provided to the LLM should include not only the data and documents related to those assets but also the explicit and implicit semantic relationships across different data types and sources.

An LLM is trained by parceling text data into smaller collections, or chunks, that can be converted into embeddings. An embedding is simply a sophisticated numerical representation of the ‘chunk’ of text that takes into consideration the context of surrounding or related information. This makes it possible to perform mathematical calculations to compare similarities, differences, and patterns between different ‘chunks’ to infer relationships and meaning. These mechanisms enable an LLM to learn a language and understand new data that it has not seen previously.
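
A bare-bones illustration of that chunk-and-embed step, using overlapping character chunks and a toy hashed bag-of-words vector in place of a real embedding model; cosine similarity between the vectors is what lets related pieces of industrial text be found and compared.

```python
import numpy as np

def chunk(text, size=200, overlap=50):
    """Split text into overlapping chunks so related context isn't cut mid-thought."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def toy_embedding(piece, dim=128):
    """Stand-in for a real embedding model: hashed bag-of-words, L2-normalized."""
    v = np.zeros(dim)
    for token in piece.lower().split():
        v[hash(token) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

manual = ("Inspect the mechanical seal every 500 operating hours. "
          "Replace the seal if leakage exceeds specification. ") * 20
pieces = chunk(manual)
E = np.stack([toy_embedding(p) for p in pieces])
similarity = E @ E.T          # cosine similarity between every pair of chunks
print(similarity.shape, similarity[0, 1].round(2))
```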

Read more at Cognite Blog

Will Generative AI finally turn data swamps into contextualized operations insight machines?

📅 Date:

🔖 Topics: Large Language Model, Generative AI

🏢 Organizations: Cognite


Generative AI, such as ChatGPT/GPT-4, has the potential to put industrial digital transformation into hyperdrive. Whereas a process engineer might spend several hours manually performing “human contextualization” (at an hourly rate of $140 or more), again and again, contextualized industrial knowledge graphs provide the trusted data relationships that enable Generative AI to accurately navigate and interpret data for operators without requiring data engineering or coding competencies.

Read more at Cognite Blog