How AI is Transforming Manufacturing: Webinar Agenda


SLIDE 1

How AI is Transforming Manufacturing

Webinar

SLIDE 2

Agenda

TIME (min) / TOPIC / KEY ITEMS / PRESENTER

3  Introduction & Housekeeping
  • About Zoom, Q&A, Agenda
  Rob Capozziello, EVP Services, Mariner

5  Conexus Intro
  • Conexus overview
  Mitch Landess, VP Innovation and Digital Transformation, Conexus

10  Transforming Quality Performance with Applied AI
  • Digital feedback loops
  • The next frontier of manufacturing efficiency
  • High impact applications in manufacturing
  David Breaugh, Manufacturing Business Lead, Microsoft

20  Data Driven Decision Making for the Factory Floor
  • Creating real change in manufacturing
  • Case Study: Global Chemical & Textiles Company
  Robbie Jones, Enterprise Sales Manager, Mariner

40  Deep Dive into Deep Learning for Visual Inspection
  • Case Study: Automotive Fabric Inspection with Deep Learning
  • Myths and Challenges in Deploying Deep Learning
  • Driving real business value and how to get started
  Stephen Welch, VP Data Science, Mariner

15  Q&A
  Moderator: Rob Capozziello

SLIDE 3

Microsoft

Multiple catalytic innovations are enabling digital feedback loops

SLIDE 4

Microsoft

Leveraging innovation to enable the next frontier of manufacturing efficiency

Top challenges with current CI programs
  • Making changes (and results) sustainable
  • Deploying what works with speed and scale
  • Finding and unlocking new funding sources

Chart: law of diminishing returns vs. the next-gen efficient frontier.

Scale innovation across value chains
  • Connectivity
  • Flexible Automation
  • Intelligence

INTELLIGENT OPERATIONS PLATFORM (spanning engineering, manufacturing, assembly, distribution, service, and the customer)

  • Leverage the cloud to connect, automate, and visualize an end-to-end business view
  • Move from reactive to predictive with big data, machine learning, and IoT
  • Amplify with algorithmic decision making and automated execution

SLIDE 5

Microsoft

Artificial Intelligence - High impact applications in manufacturing

  • Machine Learning
  • Object Recognition
  • BOT Services
  • Speech Recognition
  • Knowledge Mining
  • Machine Translation
  • Machine Teaching

SLIDE 6

Data Driven Decision Making for the Factory Floor

Robbie Jones, Mariner

SLIDE 7

Mariner – Manufacturing Analytics

  • AI/Machine Learning, Data Science, IoT and IoT Edge, Data Visualization
  • Information Life Cycle Management & Governance
  • Modern Data Warehouse/Estate, Cloud Data Platform
  • Right-sized, agile AI/ML, IoT, analytics, and data science teams
  • Analytics Teams-as-a-Service, IP Solutions, project-based services
  • BI/DW analytics and reporting

SLIDE 8

Spyglass Connected Factory

Your Virtual Production Manager

Be the Change Agent

SLIDE 9

The Problem with Industrial Process Improvement

From Adam Smith’s “The Wealth of Nations” through the Toyota Production System, manufacturers have historically sought ways to eliminate waste from industrial process systems. From Statistical Process Control to Six Sigma to Lean, these methods have delivered measurable reductions in waste. For mature companies, the value from low-hanging fruit has been captured. To gain more value, manufacturers must leverage new techniques and technologies.

  • Manufacturers have made significant investments in continuous improvement methods
  • SPC, Six Sigma, and Lean have delivered significant improvements
  • Much of the value has been realized; new approaches are required to get more

SLIDE 10

Alerting/Monitoring

Detecting emergent conditions sooner rather than later saves time and money.

OEE Analytics

Benchmark your progress. Measuring plant productivity is the first step towards improving it.
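The standard OEE definition (Availability × Performance × Quality) can be sketched in a few lines. The shift figures below are illustrative, not taken from the webinar:

```python
# OEE = Availability x Performance x Quality (standard definition).
# All input figures are illustrative placeholders.

def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """Compute Overall Equipment Effectiveness from shift-level figures."""
    availability = run_time / planned_time
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality

# Example: 480 min shift, 400 min running, 0.5 min ideal cycle time,
# 700 parts produced, 665 of them good.
print(round(oee(480, 400, 0.5, 700, 665), 3))  # -> 0.693
```

Tracking each factor separately shows whether losses come from downtime, slow cycles, or scrap.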

Predictive Maintenance

Reduce unplanned downtime by predicting the probability of failures that impact operations.

Spyglass Connected Factory

Your Virtual Production Manager

SLIDE 11

Spyglass Connected Factory

Case Study – Milliken & Company, a Global Chemical & Textiles Company

SLIDE 12

Business Challenges

  • No condition monitoring on valves & motors
  • Root cause analysis on failed equipment was difficult or nonexistent
  • Engineers spend hours per week producing spreadsheets and analytics

SLIDE 13

Results

  • Prioritized list of equipment that can be serviced during an unplanned outage
  • AI on telemetry statistics to identify the root cause of failures on critical equipment
  • Engineers spend less time reporting and more time solving problems

SLIDE 14

Industrial Equipment Manufacturer Results

  • Collect telemetry on presses for lot traceability and Kaizen
  • Ultimately use production rate information to change shift patterns
  • Resulting in increased throughput and improved overall production quality metrics

SLIDE 15

Industrial Equipment Manufacturer Results

  • Monitor equipment for improper usage and failure to conduct the preventative maintenance process
  • Deliver alerts and reminders to control room operators while also feeding the information back into their management systems for full visibility into compliance with SOPs (remember: your virtual production manager)

SLIDE 16

Automotive Manufacturer Results

  • Apply basic IoT-enabled devices to log important process variables (temperature, speed, pressure, etc.), automatically match them to lots and SKUs, and raise real-time Statistical Process Control (SPC) Out-Of-Control (OOC) alerts
  • Use lot numbers to match images with production telemetry and actual routing, alert department supervisors to the upstream process responsible for visible defects, and provide a situation report for each incident
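The real-time SPC OOC alerting described above boils down to flagging readings outside control limits. A minimal Shewhart-style sketch, with illustrative baseline values (limits would normally come from a stable baseline run on your own process):

```python
# Minimal Shewhart-style SPC check: flag points outside mean +/- 3 sigma.
# Baseline readings and process values are illustrative placeholders.
import statistics

def control_limits(baseline):
    """Derive lower/upper control limits from a stable baseline run."""
    mu = statistics.mean(baseline)
    sigma = statistics.pstdev(baseline)
    return mu - 3 * sigma, mu + 3 * sigma

def out_of_control(readings, lcl, ucl):
    """Return (index, value) for every out-of-control reading."""
    return [(i, x) for i, x in enumerate(readings) if x < lcl or x > ucl]

baseline = [200.1, 199.8, 200.3, 199.9, 200.0, 200.2, 199.7, 200.0]
lcl, ucl = control_limits(baseline)
print(out_of_control([200.1, 201.9, 199.9], lcl, ucl))  # flags the 201.9 spike
```

In production, the matched lot/SKU context from the bullet above would travel with each flagged reading so supervisors know exactly which lot triggered the alert.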

SLIDE 17

Spyglass Connected Factory and AI

  • AI monitors the accuracy of current MCBF (mean cycles between failures) numbers for your equipment and compares them to the planned outage schedule
  • AI forecasts the cycles by the time the outage is due, with a confidence limit, and suggests whether the machine should have a PM (planned maintenance) work order generated for the next planned outage, or whether it can wait until the following outage
  • AI can create a personalized MCBF: some of the products you make are more destructive to your tooling and equipment than others, in effect altering the MCBF
  • Using a personalized MCBF can improve your forecasts and ultimately improve uptime
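One way the personalized-MCBF idea could work is to weight a base MCBF by the product mix's wear on tooling. This is a hedged sketch, not Spyglass internals; the wear factors, cycle counts, and safety margin are all illustrative assumptions:

```python
# Hedged sketch: product-mix-weighted ("personalized") MCBF.
# Wear factors, cycle counts, and the 0.8 safety margin are illustrative.

def personalized_mcbf(base_mcbf, product_mix):
    """product_mix: {product: (share_of_cycles, wear_factor)}.
    A wear_factor above 1.0 means the product is harder on tooling."""
    effective_wear = sum(share * wear for share, wear in product_mix.values())
    return base_mcbf / effective_wear

def needs_pm_before_outage(cycles_run, cycles_until_outage, mcbf, margin=0.8):
    """Suggest a PM work order if forecast cycles at the outage
    exceed a safety margin of the (personalized) MCBF."""
    return (cycles_run + cycles_until_outage) > margin * mcbf

# 70% standard product, 30% of an abrasive product that wears tooling 1.6x faster.
mix = {"standard": (0.7, 1.0), "abrasive": (0.3, 1.6)}
mcbf = personalized_mcbf(120_000, mix)  # lower than the base 120,000 cycles
print(needs_pm_before_outage(70_000, 25_000, mcbf))  # -> True
```

With an all-standard mix the same machine would clear the margin; the abrasive share pulls the effective MCBF down enough to recommend a PM work order.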
SLIDE 18

Spyglass Connected Factory and AI

  • Lots of part numbers, lots of machines, and lots of operators doing hundreds of setups and stoppages within thousands of hours of scheduled work
  • Knowing where to find opportunities in your OEE can be problematic
  • Humans suffer from ego and exhaustion and often shed analytic workloads by focusing on outliers, because they are easy to spot
  • AI can help you see the recurring patterns and clusters of behavior that are the systemic contributors to an impaired OEE score

SLIDE 19

Why Spyglass Connected Factory?

  • What would be valuable to know from your data? Why?
  • If your data were processed and summarized in a better way, could you take specific actions to improve quality, production, and maintenance activities?
  • Do you know why you have downtime?
  • Is your production process telling you something you can't understand because of all the noise within your data?
  • Do you want a better understanding of how your machines are performing or how your process is functioning?

SLIDE 20

The Path to Better Performance

Our Guaranteed Approach to Your Personal Virtual Production Manager

Define Success

The Mariner team works with you to define your success criteria

Install & Train SCF

Connect Spyglass Connected Factory to your production lines and teach it to recognize recurring issues.

Be a Change Agent

Instead of fighting fires, you can mentor your teams to ensure you remain competitive. You are the change agent.

SLIDE 21

Please send detailed questions to robbie.jones@mariner-usa.com. For more information, please visit https://mariner-usa.com/

SLIDE 22

Deploying Deep Learning For Quality Inspection

Stephen Welch, Mariner

SLIDE 23

Which Images Show Defects? (four example images, numbered 1–4)

SLIDE 24

Our data comes from a tricky fabric manufacturing problem

SLIDE 25

Traditional machine vision systems use a two-step process to make decisions:

1. IMAGE CAPTURE
2. COMPUTE (integrated or discrete) running the computer vision software: feature extraction, then decisioning, e.g. MAX_CONTRAST > THRESHOLD1 AND DEFECT_SIZE > THRESHOLD2?

Results drive the Human Machine Interface and factory equipment such as diversion gates and pick-and-place robots.
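The slide's decision rule, written out as code. The threshold values are illustrative placeholders; in a real system they are tuned (and usually fixed) by the vision system vendor:

```python
# Traditional vision decisioning: hand-crafted features + fixed thresholds.
# THRESHOLD1/THRESHOLD2 values are illustrative placeholders.

THRESHOLD1 = 0.35   # minimum contrast for a candidate defect
THRESHOLD2 = 12.0   # minimum defect blob size, in pixels

def is_defective(features):
    """features: dict produced by the feature-extraction step."""
    return (features["max_contrast"] > THRESHOLD1
            and features["defect_size"] > THRESHOLD2)

print(is_defective({"max_contrast": 0.6, "defect_size": 20.0}))  # -> True
print(is_defective({"max_contrast": 0.6, "defect_size": 5.0}))   # -> False
```

The brittleness of this approach (fixed thresholds on hand-picked features) is exactly what the deep learning comparison on the following slides addresses.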

SLIDE 26

Which Images Show Defects? (the four example images again, numbered 1–4)

SLIDE 27

Which Images Show Defects? Answers: GOOD, GOOD, DEFECTIVE, DEFECTIVE (for images 1–4)

SLIDE 28

TRADITIONAL MACHINE VISION

IMAGE CAPTURE → FEATURE EXTRACTION → DECISIONING (MAX_CONTRAST > THRESHOLD1 AND DEFECT_SIZE > THRESHOLD2?) → PREDICTIONS/RESULTS

  • These algorithms are typically designed once by the vision system manufacturer and “baked in” to production software.
  • They may consist of many tunable parameters; it is often difficult to find an optimal configuration, even for experts.

DEEP LEARNING

IMAGE CAPTURE → DEEP LEARNING MODEL (TRAINED ON YOUR DATA) → PREDICTIONS/RESULTS

  • The deep learning model is trained using labeled examples from your experts, and updated as conditions change.

SLIDE 29

AlexNet: Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "ImageNet classification with deep convolutional neural networks." Advances in Neural Information Processing Systems, 2012.

ResNet: He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

SLIDE 30

  • 97.7% ResNet classification accuracy on a held-out test set
  • 30X reduction in false rejects
  • 2–5X improvement over manual inspection

SLIDE 31

James Carroll, Machine vision’s hottest technologies: How much are they used, where, how, and by whom? Our survey of 320 machine vision professionals. Vision System Design, Dec 6 2019.

So if Deep Learning is so great, why is it not used more in machine vision?

SLIDE 32

Deep Learning Myths

1. Deep Learning models need to be trained on very large datasets
2. Deep Learning models take a long time to train
3. You need a Data Scientist or Machine Learning expert on staff to use Deep Learning

Deep Learning Challenges

1. How to operationalize/deploy?
2. Model maintenance – how to measure drift, and how often to retrain?
3. Change management – shifting data to the center of your quality processes

SLIDE 33

Paul Bergmann, Michael Fauser, David Sattlegger, Carsten Steger. MVTec AD - A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection; in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019

Myth #1: Deep Learning models need to be trained on very large datasets.

SLIDE 34

Classification Detection Segmentation

Let’s walk through training a deep learning classification model on leather examples from the MVTec dataset.

Myth #1: Deep Learning models need to be trained on very large datasets.

SLIDE 35
  • We’ll train a ResNet-34 Deep Learning classification model on the leather dataset
  • 80/20 random train/test split
  • Note how few examples of defects we have
  • We’ll show two techniques: training ResNet-34 “from scratch”, and transfer learning with a pretrained model

Myth #1: Deep Learning models need to be trained on very large datasets.

SLIDE 36

Myth #1: Deep Learning models need to be trained on very large datasets.

Model trained “from scratch”: test set accuracy = 57/73 = 78%; training time saturates after ~30 minutes.

Model pre-trained on the ImageNet dataset (transfer learning): test set accuracy = 73/73 = 100%; training time ≈ 10 minutes.

SLIDE 37
SLIDE 38

Test set accuracy = 77/78 = 98.7%; training time ≈ 15 minutes.

SLIDE 39
SLIDE 40

Test set accuracy = 71/96 = 73.9%; training time saturates after ~30 minutes.

More difficult problems do require more labeled data; we typically recommend starting with ~50 examples of each category.

SLIDE 41

Deep Learning Myths

1. Deep Learning models need to be trained on very large datasets
2. Deep Learning models take a long time to train
3. You need a Data Scientist or Machine Learning expert on staff to use Deep Learning

Deep Learning Challenges

1. How to operationalize/deploy?
2. Model maintenance – how to measure drift, and how often to retrain?
3. Change management – shifting data to the center of your quality processes

SLIDE 42

Myth #2: Deep Learning models take a long time to train.

SLIDE 43

Deep Learning Myths

1. Deep Learning models need to be trained on very large datasets
2. Deep Learning models take a long time to train
3. You need a Data Scientist or Machine Learning expert on staff to use Deep Learning

Deep Learning Challenges

1. How to operationalize/deploy?
2. Model maintenance – how to measure drift, and how often to retrain?
3. Change management – shifting data to the center of your quality processes

SLIDE 44

Myth #3: You need a Data Scientist or Machine Learning expert on staff to use Deep Learning

  • All the models we’ve shown here were trained using tools that are part of our product, Spyglass Visual Inspection.
  • What we’ve seen in practice is that the customer expertise that really leads to successful outcomes is subject matter expertise.
  • To create high-quality labeled datasets, we rely heavily on our customers’ subject matter experts – this part is more important than the deep learning!

SLIDE 45

Deep Learning Myths

1. Deep Learning models need to be trained on very large datasets
2. Deep Learning models take a long time to train
3. You need a Data Scientist or Machine Learning expert on staff to use Deep Learning

Deep Learning Challenges

1. How to operationalize/deploy?
2. Model maintenance – how to measure drift, and how often to retrain?
3. Change management – shifting data to the center of your quality processes

SLIDE 46

Challenges to Deep Learning in Machine Vision

CHALLENGES

Deployment
  • Requires high-power compute (e.g. GPUs)
  • Many organizations have significant existing investments in machine vision hardware – rip & replace is often not a viable option
  • Closed systems, and a lack of open machine vision data storage, transfer, and signaling standards

Model Maintenance
  • New defect classes arise in production and need to be added to models
  • As products, parts, images, and conditions change over time, Deep Learning (DL) algorithm performance will degrade/drift (1)
  • It can be difficult to know when model performance has degraded (most quality processes have no “ground truth” quality measures) (2)

Change Management
  • The single most important factor in a successful Deep Learning deployment is quality labeled training data – DL systems are only as good as the data they’re trained on
  • During implementation and production, the labeled data must be updated to capture quality experts’ knowledge

(1) This is true in traditional machine vision as well, and is often mitigated by carefully controlling the physical environment. Since deep learning models are learned entirely from data, they can be more sensitive to changes in the underlying data than traditional machine vision approaches.
(2) Monitoring DL model health remains an area of active research, but a number of approaches are effective in practice today, such as monitoring model confidence.

SLIDE 47

Challenge #1: Deployment

FACTORY FLOOR
  • Industrial vision system feeding the SVI machine: an edge container with the ML model + deployment code, a Human Machine Interface, local data storage, local GPU compute, and optional local scoring
  • Factory equipment (diversion gates, pick-and-place robots) driven by control signals over Modbus/Profibus/DeviceNet/Ethernet/IP
  • Images + metadata collected over the LAN; reporting + data sent to the cloud over MQTT

CLOUD
  • Cloud storage (Azure SQL + Blob), with optional cloud storage and scoring
  • Model training (PyTorch), with model updates pushed back to the edge
  • Monitoring + alerting; quality analytics (Power BI)

SLIDE 48

Challenges to Deep Learning in Machine Vision

Deployment / Model Maintenance / Change Management

SOLUTIONS

Deployment
  • Deploy models on-premises using GPU machines
  • Integrate with existing hardware wherever feasible via existing communication protocols (e.g. TCP/IP)

SLIDE 49

Challenge #2: Model Maintenance & Monitoring

SLIDE 50

Challenges to Deep Learning in Machine Vision

Deployment / Model Maintenance / Change Management

SOLUTIONS

Deployment
  • Deploy models on-premises using GPU machines
  • Integrate with existing hardware wherever feasible via existing communication protocols (e.g. TCP/IP)

Model Maintenance
  • Leverage cloud infrastructure to monitor model performance (e.g. model confidence) across multiple lines/plants as needed
  • Tooling/software to facilitate rapid labeling, retraining, and deployment
  • Perform regular retraining
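The confidence-monitoring idea above can be sketched simply: watch a rolling window of the model's prediction confidence and flag a sustained drop against a baseline. Window size, baseline, and the drop threshold are illustrative assumptions:

```python
# Hedged sketch of drift detection via prediction-confidence monitoring.
# Window size, baseline mean, and drop threshold are illustrative.
from collections import deque
import statistics

class ConfidenceMonitor:
    def __init__(self, window=100, baseline_mean=0.95, drop=0.10):
        self.recent = deque(maxlen=window)   # rolling window of confidences
        self.baseline_mean = baseline_mean   # measured on validation data
        self.drop = drop                     # tolerated drop before alerting

    def observe(self, confidence):
        self.recent.append(confidence)

    def drifting(self):
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data yet
        return statistics.mean(self.recent) < self.baseline_mean - self.drop

mon = ConfidenceMonitor(window=5)
for c in [0.96, 0.94, 0.95, 0.93, 0.96]:
    mon.observe(c)
print(mon.drifting())  # -> False: confidence near baseline
for c in [0.70, 0.72, 0.68, 0.71, 0.69]:
    mon.observe(c)
print(mon.drifting())  # -> True: sustained confidence drop
```

A sustained alert from a monitor like this is the cue for the labeling/retraining loop in the solutions above.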
SLIDE 51

Challenges to Deep Learning in Machine Vision

Deployment / Model Maintenance / Change Management

SOLUTIONS

Deployment
  • Deploy models on-premises using GPU machines
  • Integrate with existing hardware wherever feasible via existing communication protocols (e.g. TCP/IP)

Model Maintenance
  • Leverage cloud infrastructure to monitor model performance (e.g. model confidence) across multiple lines/plants as needed
  • Tooling/software to facilitate rapid labeling, retraining, and deployment
  • Perform regular retraining

Change Management
  • One way to think about deep learning for visual inspection is as part of a broad industry-wide shift to more data-driven approaches
  • We recommend thinking of DL as a new Continuous Improvement QA/QC (Quality Assurance/Quality Control) tool – happily, performance improves with the amount of labeled data

SLIDE 52

Deep Learning Myths

1. Deep Learning models need to be trained on very large datasets
2. Deep Learning models take a long time to train
3. You need a Data Scientist or Machine Learning expert on staff to use Deep Learning

Deep Learning Challenges

1. How to operationalize/deploy?
2. Model maintenance – how to measure drift, and how often to retrain?
3. Change management – shifting data to the center of your quality processes

SLIDE 53
SLIDE 54

Deep Learning. Delivered.

Our Guaranteed Approach to Visual Inspection

Define Success

The Spyglass team works with you to define your unique vision accuracy requirements.

Supply Images

Provide sets of images of your products that represent acceptable quality as well as images of each class of defect.

Prove it Works

Using supplied images, the Spyglass team builds an AI model demonstrating the success criteria.

SLIDE 55

Please send detailed questions to stephen.welch@mariner-usa.com. For more information, please visit https://mariner-usa.com/