
W4 Test Analytics, AI/ML
Wednesday, October 2nd, 2019, 11:30 AM
From Zero to AI Hero
Presented by: Kevin Pyles


SLIDE 1

W4
Test Analytics, AI/ML
Wednesday, October 2nd, 2019, 11:30 AM

From Zero to AI Hero

Presented by:
Kevin Pyles
Domo, Inc

Brought to you by:

888-268-8770 · 904-278-0524 · info@techwell.com · http://www.starwest.techwell.com/

SLIDE 2

Kevin Pyles

Kevin Pyles has been in QA for over ten years and has led local and remote teams responsible for QA across forty-plus concurrent projects. Kevin's previous teams provided web load testing for over 600,000 connections/sec and I18N and L10N testing for over a hundred languages and countries. Kevin recently fell in love with artificial intelligence and Python and has since been promoting AI as the future of testing, whether through homegrown or commercial services. Follow Kevin on Twitter @pyleskevin.

SLIDE 3

(Background collage of ML formulas:)

Stochastic Gradient Descent:  for i in range(m): Θⱼ = Θⱼ − α(ŷᵢ − yᵢ)xᵢⱼ
Linear Regression:  ŷ = a*x + b
Cost Function (J) of Linear Regression:  J = (1/n) Σᵢ₌₁ⁿ (predᵢ − yᵢ)²
Naïve Bayes:  P(c|x) = P(x|c)P(c) / P(x), with P(c|X) = P(x₁|c) × P(x₂|c) × …
Multiple Linear Regression:  ŷ = b₁*X₁ + b₂*X₂ + b₃*X₃ + a
Perceptron Learning Rule:  wᵢ,ⱼ(next step) = wᵢ,ⱼ + η(yⱼ − ŷⱼ)xᵢ
Root Mean Square Error:  RMSE(X,h) = √( (1/m) Σᵢ₌₁ᵐ (h(x⁽ⁱ⁾) − y⁽ⁱ⁾)² )
Logistic Regression:  F(x) = 1 / (1 + e⁻ˣ)

from ZERO to AI HERO
by Kevin Pyles

ŷ = a*x + b
Linear Regression

Our journey begins…
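These formulas aren't just stage decoration. The stochastic gradient descent update and the ŷ = a*x + b model above, for instance, fit together in a few lines of plain Python (a minimal sketch; the toy data, seed, and learning rate are made up for illustration):

```python
import random

# Toy data generated from the "true" line y = 2*x + 1
data = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]]

a, b = 0.0, 0.0   # model: y_hat = a * x + b
alpha = 0.05      # learning rate
random.seed(0)

for step in range(2000):
    x, y = random.choice(data)       # "stochastic": one random sample per step
    y_hat = a * x + b                # current prediction
    a -= alpha * (y_hat - y) * x     # theta_j = theta_j - alpha*(y_hat - y)*x_j
    b -= alpha * (y_hat - y)         # bias term uses x_j = 1

print(round(a, 2), round(b, 2))
```

With noiseless data the updates settle near a = 2 and b = 1: the Θⱼ rule from the slide, applied one sample at a time.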

SLIDE 4

Thoughts after this presentation:

  • What is the future of testing?
  • Will AI take over our jobs?
  • What am I going to do about it?
  • What AI can do…
SLIDE 5

Jump in!

SLIDE 6

ML Recommendation System

import numpy as np
import pandas as pd
import matrix_factorization_utilities

# Load user ratings
raw_dataset_df = pd.read_csv('movie_ratings_data_set.csv')

# Load movie titles
movies_df = pd.read_csv('movies.csv', index_col='movie_id')

# Convert the running list of user ratings into a matrix
ratings_df = pd.pivot_table(raw_dataset_df, index='user_id',
                            columns='movie_id', aggfunc=np.max)

# Apply matrix factorization to find the latent features
U, M = matrix_factorization_utilities.low_rank_matrix_factorization(
    ratings_df.as_matrix(), num_features=15, regularization_amount=0.1)
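Once the factorization returns U (users × latent features) and M (latent features × movies), a predicted rating is just the dot product of a user's row of U with a movie's column of M. A toy sketch of that last step, with made-up factor matrices rather than real course output:

```python
# Hypothetical latent factors: 2 users x 2 features, 2 features x 3 movies
U = [[1.0, 0.5],
     [0.2, 1.5]]
M = [[4.0, 2.0, 1.0],
     [1.0, 3.0, 4.0]]

def predict(user, movie):
    # Predicted rating = dot product of the user row and the movie column
    return sum(U[user][k] * M[k][movie] for k in range(len(M)))

# Fill in every user/movie cell, including ones never rated
all_predictions = [[predict(u, m) for m in range(3)] for u in range(2)]
print(all_predictions)
```

The payoff is that the filled-in cells give a rating estimate even for movies a user never rated, which is what makes this a recommendation system.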

SLIDE 7

Is this Python? Now that’s a Panda!

SLIDE 8

Now that’s the Matrix!

SLIDE 9

Issues learning AI

  • Code
  • Python
  • AI/Machine Learning/Data Science
  • Automation
  • Data
SLIDE 10

Never Give Up!

What should we do next?

SLIDE 11

Bring AI to Testing

The Learning Path

Automation → Data Science → Machine Learning → Artificial Intelligence

SLIDE 12


Automation

Our Path to Automation

  • Python
  • Demo, demo, demo
  • Start simple
  • Add more tests
SLIDE 13

Quick Plug for Python

  • Python #1 for AI
  • 41.7% of Developers
  • Beginner Friendly

How to Learn Python

  • Read a book and complete the exercises
  • Work on real-world projects (test automation)
  • Demo to the team
SLIDE 14

Why Beginning Python?

  • We are beginners
  • Someone else recommended it
  • We wanted to be professionals ☺

Automation: Keys to Success

  • Start simple
  • Even more simple
  • Go one step at a time
SLIDE 15

Selenium: First Script

from selenium import webdriver

# Open a browser
driver = webdriver.Chrome()

# Go to Google.com
driver.get('https://www.google.com')


Data Science

SLIDE 16

AI, ML, Data Science, Oh my!

  • Artificial Intelligence
  • Machine Learning
  • Data Science
  • Data Analytics

How to learn Data Science

  • AI for Everyone on Coursera
  • Data Analysis w/ Python 3 and Pandas
  • Read Data Science from Scratch
  • Linear Algebra from Khan Academy
SLIDE 17

Data science is more than charts

  • In this Book:
    – Python
    – Linear Algebra
    – Statistics & Probability
    – Data
    – Linear Regression
    – Neural Networks
    – And lots more!

Data, data, data

  • Get Data!
    – Test Cases
    – Tickets, e.g., Jira
    – Code, e.g., Git

SLIDE 18

Example:

  • What Test Cases have NEVER failed?
  • Could we remove some of them?

Chart of Test Cases Pass/Fail
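With run history exported from a test-management tool, the never-failed question reduces to a few lines of Python. A minimal sketch over made-up result records (the test names and export format are hypothetical):

```python
# Hypothetical run history: (test_case, result) pairs from a results export
runs = [
    ("login_valid", "pass"), ("login_valid", "pass"),
    ("login_invalid", "pass"), ("login_invalid", "fail"),
    ("checkout_empty_cart", "pass"), ("checkout_empty_cart", "pass"),
    ("search_unicode", "fail"), ("search_unicode", "pass"),
]

# Group results by test case
history = {}
for case, result in runs:
    history.setdefault(case, []).append(result)

# Test cases that have NEVER failed are candidates for removal
never_failed = sorted(case for case, results in history.items()
                      if "fail" not in results)
print(never_failed)  # prints: ['checkout_empty_cart', 'login_valid']
```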

SLIDE 19

Chart of Test Cases Pass/Fail

Additional Questions

  • What else could you do with this data?
  • Where are your bugs clustered?
  • What bugs aren’t getting fixed?
  • What code is most buggy?
  • What is your bug to feature rate?
SLIDE 20

Share your findings!

  • Charts are fun to share!
  • We shared and immediately deleted over 900 test cases! Wow!
  • But what about Quality?
  • Quality didn't go down! ☺

More Data Analytics


Machine Learning

SLIDE 21

ML Learning Plan

  • Complete MachineLearningMastery.com courses
  • Practical Machine Learning Tutorial with Python
  • Read Hands-On Machine Learning with Scikit-Learn and TensorFlow
  • Complete AI for Element Selection from TestAutomationU

Hands-On Machine Learning

  • In this Book:
    – Classification
    – Training Models
    – Support Vector Machines
    – Decision Trees
    – TensorFlow
    – Convolutional Neural Nets
    – Recurrent Neural Nets
    – Reinforcement Learning
    – Yeah…there's more in there…

SLIDE 22

How can ML help with Automation?

Fuzzy Screenshot Comparison

Problem: image comparison tools require exact matches, i.e., pixel = pixel.
Solution: fuzzy comparison using machine learning.

Generate as many baseline screenshots as you want. The test case then creates a screenshot and compares it against the previous screenshots. We even use black and white because we don't care about the color. If there is a 99% match, that is close enough.
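The deck's actual implementation (slide 23) scores similarity with SSIM; to show just the "close enough" idea in isolation, here is a much simpler stand-in that scores two same-sized grayscale images by mean absolute pixel difference (the function name, toy pixel values, and 99% threshold are illustrative, not the talk's code):

```python
def fuzzy_match(gray_a, gray_b, threshold=0.99):
    """Score two same-sized grayscale images (nested lists of 0-255 values)
    and report whether they are 'close enough'."""
    total = diff = 0
    for row_a, row_b in zip(gray_a, gray_b):
        for pa, pb in zip(row_a, row_b):
            total += 255             # max possible difference per pixel
            diff += abs(pa - pb)     # actual difference for this pixel
    similarity = 1 - diff / total
    return similarity, similarity >= threshold

# A baseline screenshot and a new one with one slightly changed pixel
baseline = [[10, 10, 10], [200, 200, 200]]
current  = [[10, 10, 12], [200, 200, 200]]

similarity, ok = fuzzy_match(baseline, current)
print(round(similarity, 4), ok)  # prints: 0.9987 True
```

A tiny rendering difference scores above the threshold and passes, while a real layout change would drop the score well below it.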

SLIDE 23

Code

import cv2
from skimage.measure import compare_ssim

class ImageCompare():
    def compare_images(self, image_array_1, image_array_2):
        # Convert the images to grayscale
        grayA = cv2.cvtColor(image_array_1[:], cv2.COLOR_BGR2GRAY)
        grayB = cv2.cvtColor(image_array_2[:], cv2.COLOR_BGR2GRAY)
        # Compute the Structural Similarity Index (SSIM) between the two
        # images, ensuring that the difference image is returned
        (score, diff) = compare_ssim(grayA, grayB, full=True)
        diff = (diff * 255).astype("uint8")
        print("SSIM: {}".format(score))
        return score
. . .

AI Bug Detected

SLIDE 24

AI Changes Detected


Artificial Intelligence

SLIDE 25

AI Learning Plan

  • Coursera Deep Learning Specialization
  • Twitter Sentiment Analysis (Learn Python for Data Science #2)

How can AI help with testing?

SLIDE 26

Sentiment Analysis

  • Sentiment analysis is a field of Natural Language Processing (NLP) that builds models to identify and classify attributes of an expression.

Customer Feedback: Positive or Negative?
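A toy classifier for feedback like this can be built straight from the Naïve Bayes formula on the opening slide: count words per class, then pick the class with the highest P(c) × Π P(wᵢ|c). Everything below (training phrases, labels) is made up for illustration:

```python
import math
from collections import Counter

# Tiny made-up training set of labeled customer feedback
train = [
    ("great product love it", "positive"),
    ("love the new update", "positive"),
    ("terrible bug hate it", "negative"),
    ("hate the crashes terrible", "negative"),
]

# Count words per class so we can estimate P(word|class)
word_counts = {"positive": Counter(), "negative": Counter()}
class_totals = Counter()
for text, label in train:
    class_totals[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Pick the class maximizing log P(c) + sum log P(w|c), Laplace-smoothed."""
    best_label, best_score = None, float("-inf")
    for label, counts in word_counts.items():
        score = math.log(class_totals[label] / sum(class_totals.values()))
        denom = sum(counts.values()) + len(vocab)
        for w in text.split():
            score += math.log((counts[w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("love this great release"))  # prints: positive
```

Logs are summed instead of multiplying raw probabilities so tiny products don't underflow, and add-one smoothing keeps unseen words from zeroing out a class.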

SLIDE 27

Bring AI to Testing

The Learning Path

Automation → Data Science → Machine Learning → Artificial Intelligence

"At some point, everything's going to go south on you. You're going to say, 'This is it. This is how I end.' Now you can either accept that, or you can get to work. You solve one problem, then you solve the next problem, and the next. And if you solve enough problems, you get to go home."

Mark Watney, The Martian

SLIDE 28

Thank you.

@pyleskevin www.linkedin.com/in/kevin-pyles