Sensor Fusion

Assembly Line

IBM and AWS partnering to transform industrial welding with AI and machine learning

📅 Date:

🔖 Topics: Welding, Machine Learning, Quality Assurance, Sensor Fusion

🏢 Organizations: IBM, AWS


IBM Smart Edge for Welding on AWS uses audio and visual capture technology developed in collaboration with IBM Research. State-of-the-art artificial intelligence and machine learning models analyze visual and audio recordings taken at the time of the weld to assess its quality. If the quality does not meet standards, alerts are sent and remediation can take place without delay.

The solution substantially reduces both the time between detection and remediation of defects and the number of defects on the manufacturing line. By combining optical, thermal, and acoustic insights during weld inspection, two key manufacturing personas, the weld technician and the process engineer, can better determine whether a welding discontinuity is likely to become a defect that costs time and money.
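
As a rough illustration of that pattern, the sketch below scores a weld by combining simple optical, thermal, and acoustic features and raises an alert when a trained classifier flags a likely defect. The feature choices, classifier interface, and threshold are illustrative assumptions, not IBM's actual implementation.

```python
# Minimal sketch of a multi-sensor weld-quality check. Feature extraction,
# the classifier interface, and the alert threshold are assumptions for
# illustration only.
import numpy as np

def extract_features(frames: np.ndarray, thermal: np.ndarray, audio: np.ndarray) -> np.ndarray:
    """Collapse each sensor stream into a small feature vector and concatenate."""
    optical = np.array([frames.mean(), frames.std()])          # brightness statistics
    heat = np.array([thermal.max(), thermal.mean()])           # peak and average temperature
    acoustic = np.array([np.abs(audio).mean(), audio.std()])   # rough signal-energy proxies
    return np.concatenate([optical, heat, acoustic])

def inspect_weld(features: np.ndarray, model, defect_threshold: float = 0.5) -> bool:
    """Score the weld with a trained classifier and flag likely defects immediately."""
    p_defect = float(model.predict_proba(features.reshape(1, -1))[0, 1])
    if p_defect >= defect_threshold:
        print(f"ALERT: probable weld defect (p={p_defect:.2f}); trigger remediation")
        return True
    return False
```

The point of scoring all three sensor streams together is that an alert can fire right after the weld, rather than after a later manual inspection.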

Read more at IBM Blog

Sensor Fusion with AI Transforms the Smart Manufacturing Era

📅 Date:

✍️ Author: Majeed Ahmad

🔖 Topics: Sensor Fusion

🏢 Organizations: Bosch, STMicroelectronics


Bosch calls its new semiconductor fab in Dresden a smart factory: highly automated, fully connected machines and integrated processes are combined with AI and Internet of Things (IoT) technologies to enable data-driven manufacturing. With machines that think for themselves and glasses with built-in cameras, maintenance work in this fab can be performed from 9,000 kilometers (about 5,592 miles) away.

STMicroelectronics (STMicro) has added compute power to sensing in what it calls an intelligent sensor processing unit (ISPU), which combines a DSP suited to running AI algorithms and a MEMS sensor on the same chip. This merger of sensing and AI puts decision-making at the edge, enabling smart sensors to sense, process, and act, bridging technology and the physical world.
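
The sketch below illustrates, in plain Python rather than ISPU firmware, the pattern such a part enables: a tiny model classifies each window of MEMS samples next to the sensor, and only decision changes leave the device. The window size, thresholds, and class labels are illustrative assumptions.

```python
# Illustrative sketch of sensor-plus-AI at the edge: classify each window of
# 3-axis accelerometer samples locally and emit only events, not raw data.
# The threshold "model" stands in for a real network deployed on the sensor.
import numpy as np

def classify_window(samples: np.ndarray) -> str:
    """Toy activity classifier over one window of shape (window_size, 3)."""
    energy = float(np.mean(np.linalg.norm(samples, axis=1)))
    if energy < 1.1:
        return "stationary"
    return "vibration" if samples.std() > 0.5 else "motion"

def run_edge_loop(stream, window_size: int = 64):
    """Consume samples, decide locally, and yield only state changes."""
    last = None
    buf = []
    for sample in stream:
        buf.append(sample)
        if len(buf) == window_size:
            state = classify_window(np.asarray(buf))
            buf.clear()
            if state != last:          # send an event only when the decision changes
                last = state
                yield state
```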

Read more at EE Times

Meta-Transformer: A Unified Framework for Multimodal Learning

📅 Date:

✍️ Authors: Yiyuan Zhang, Kaixiong Gong, Kaipeng Zhang, Hongsheng Li, Yu Qiao, Wanli Ouyang, Xiangyu Yue

🔖 Topics: Sensor Fusion, Machine Vision


Multimodal learning aims to build models that can process and relate information from multiple modalities. Despite years of development in this field, it remains challenging to design a unified network for processing various modalities (e.g., natural language, 2D images, 3D point clouds, audio, video, time series, tabular data) due to the inherent gaps among them. In this work, we propose a framework, named Meta-Transformer, that leverages a frozen encoder to perform multimodal perception without any paired multimodal training data. In Meta-Transformer, the raw input data from various modalities are mapped into a shared token space, allowing a subsequent encoder with frozen parameters to extract high-level semantic features of the input data. Composed of three main components, a unified data tokenizer, a modality-shared encoder, and task-specific heads for downstream tasks, Meta-Transformer is the first framework to perform unified learning across 12 modalities with unpaired data. Experiments on different benchmarks reveal that Meta-Transformer can handle a wide range of tasks including fundamental perception (text, image, point cloud, audio, video), practical applications (X-ray, infrared, hyperspectral, and IMU), and data mining (graph, tabular, and time-series). Meta-Transformer indicates a promising future for developing unified multimodal intelligence with transformers.
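
A minimal PyTorch-style sketch of that three-part layout, with illustrative tokenizers and dimensions rather than the paper's exact configuration: per-modality tokenizers map raw inputs into a shared token space, a shared encoder with frozen parameters extracts features, and only the tokenizers and lightweight task heads are trained.

```python
# Schematic of the tokenizer -> frozen shared encoder -> task head layout
# described above. Dimensions, tokenizers, and head are assumptions.
import torch
import torch.nn as nn

d_model = 256

tokenizers = nn.ModuleDict({
    "image": nn.Linear(16 * 16 * 3, d_model),   # flattened image patches -> tokens
    "audio": nn.Linear(128, d_model),           # spectrogram frames -> tokens
    "text":  nn.Embedding(30000, d_model),      # token ids -> tokens
})

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), num_layers=4
)
for p in encoder.parameters():
    p.requires_grad = False                      # the shared encoder stays frozen

classifier_head = nn.Linear(d_model, 10)         # only tokenizers and heads are trained

def forward(modality: str, raw_input: torch.Tensor) -> torch.Tensor:
    tokens = tokenizers[modality](raw_input)     # map raw input into the shared token space
    features = encoder(tokens)                   # frozen parameters extract semantics
    return classifier_head(features.mean(dim=1)) # pool tokens and predict

# Example: a batch of 2 images, each split into 196 flattened 16x16x3 patches.
imgs = torch.randn(2, 196, 16 * 16 * 3)
logits = forward("image", imgs)                  # -> shape (2, 10)
```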

Read more at arXiv

ImageBind: One Embedding Space To Bind Them All

📅 Date:

✍️ Authors: Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra

🔖 Topics: Sensor Fusion, Machine Vision


We present ImageBind, an approach to learning a joint embedding across six different modalities: images, text, audio, depth, thermal, and IMU data. We show that not all combinations of paired data are necessary to train such a joint embedding; image-paired data alone is sufficient to bind the modalities together. ImageBind can leverage recent large-scale vision-language models, and extends their zero-shot capabilities to new modalities just by using their natural pairing with images. It enables novel emergent applications 'out-of-the-box', including cross-modal retrieval, composing modalities with arithmetic, cross-modal detection, and generation. The emergent capabilities improve with the strength of the image encoder, and we set a new state of the art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Finally, we show strong few-shot recognition results outperforming prior work, and that ImageBind serves as a new way to evaluate vision models for visual and non-visual tasks.
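
A minimal sketch of the binding idea, assuming placeholder encoders, batch pairing, and temperature: each modality's embeddings are aligned to the image embedding space with an InfoNCE-style contrastive loss over image-paired batches, after which cross-modal retrieval works even between modalities that were never paired directly.

```python
# Sketch of "bind through images": align each modality encoder to the image
# embedding space with a contrastive loss. Encoders, batching, and the
# temperature value are illustrative assumptions.
import torch
import torch.nn.functional as F

def contrastive_align(image_emb: torch.Tensor, other_emb: torch.Tensor,
                      temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss pulling paired (image, other-modality) embeddings together."""
    image_emb = F.normalize(image_emb, dim=-1)
    other_emb = F.normalize(other_emb, dim=-1)
    logits = image_emb @ other_emb.t() / temperature   # similarity of every pair in the batch
    targets = torch.arange(image_emb.size(0))          # the i-th image matches the i-th sample
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

def cross_modal_retrieval(query_emb: torch.Tensor, gallery_emb: torch.Tensor) -> torch.Tensor:
    """After training, retrieve across modalities (e.g. audio queries against a depth gallery)."""
    sims = F.normalize(query_emb, dim=-1) @ F.normalize(gallery_emb, dim=-1).t()
    return sims.argmax(dim=-1)                         # index of the best match per query
```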

Read more at arXiv

Perceiver: General Perception with Iterative Attention

📅 Date:

✍️ Authors: Andrew Jaegle, Felix Gimeno, Andrew Brock, Andrew Zisserman, Oriol Vinyals, Joao Carreira

🔖 Topics: Sensor Fusion, Machine Vision


Biological systems perceive the world by simultaneously processing high-dimensional inputs from modalities as diverse as vision, audition, touch, and proprioception. The perception models used in deep learning, on the other hand, are designed for individual modalities, often relying on domain-specific assumptions such as the local grid structures exploited by virtually all existing vision models. These priors introduce helpful inductive biases, but also lock models to individual modalities. In this paper we introduce the Perceiver, a model that builds upon Transformers and hence makes few architectural assumptions about the relationship between its inputs, but that also scales to hundreds of thousands of inputs, like ConvNets. The model leverages an asymmetric attention mechanism to iteratively distill inputs into a tight latent bottleneck, allowing it to scale to handle very large inputs. We show that this architecture is competitive with or outperforms strong, specialized models on classification tasks across various modalities: images, point clouds, audio, video, and video+audio. The Perceiver obtains performance comparable to ResNet-50 and ViT on ImageNet without 2D convolutions by directly attending to 50,000 pixels. It is also competitive in all modalities in AudioSet.
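
The asymmetric attention mechanism can be sketched in a few lines of PyTorch, with illustrative sizes rather than the paper's settings: a small learned latent array cross-attends to the large input array, so the cost scales with the number of latents times the number of inputs rather than quadratically in the input, and self-attention then operates only inside the cheap latent bottleneck.

```python
# Sketch of the Perceiver-style asymmetric attention: a small latent array
# cross-attends to a large input array, then refines itself with self-attention.
# Sizes and layer counts are illustrative assumptions.
import torch
import torch.nn as nn

d = 128
num_latents = 64                                  # size of the latent bottleneck

latents = nn.Parameter(torch.randn(num_latents, d))
cross_attn = nn.MultiheadAttention(d, num_heads=1, batch_first=True)
self_attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)

def perceiver_block(inputs: torch.Tensor) -> torch.Tensor:
    """inputs: (batch, n_inputs, d), where n_inputs may be in the tens of thousands."""
    b = inputs.size(0)
    z = latents.unsqueeze(0).expand(b, -1, -1)    # (batch, num_latents, d) latent queries
    z, _ = cross_attn(z, inputs, inputs)          # cost ~ num_latents * n_inputs, not n_inputs^2
    z, _ = self_attn(z, z, z)                     # cheap self-attention in the bottleneck
    return z                                      # iterate this block to keep distilling the input

# Example: roughly 50,000 "pixels" of a 224x224 image projected to d channels.
pixels = torch.randn(2, 224 * 224, d)
latent_summary = perceiver_block(pixels)          # -> (2, 64, 128)
```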

Read more at arXiv