
Labs #4: APIs
ML APIs Lab #1
 Previously, rendering of the Guestbook was done in Flask with Jinja templates returning HTML
 Recall the client-side rendering approach: the front-end UI code runs completely in the browser


1. Cloud Translate via Python
 Install the Cloud Translate package
pip3 install --upgrade google-cloud-translate --user
 Go to the Translate cloud-client code
cd ~/python-docs-samples/translate/cloud-client
 Examine the code
Portland State University CS 430P/530 Internet, Web & Cloud Systems

2.  Run snippets.py on the text string and show the output
python3 snippets.py translate-text en '你有沒有帶外套'
(Chinese for "Did you bring a jacket?")

3. Cloud Natural Language via Python
 Install the Cloud Natural Language package
pip3 install --upgrade google-cloud-language --user
 Go to the Natural Language cloud-client code
cd ~/python-docs-samples/language/cloud-client/v1

4.  Examine the code for entity analysis

5.  Examine the code for sentiment analysis

6.  Run the entities-text function in snippets.py and show the output
python snippets.py entities-text
 Edit the string used in sentiment_text() and run the script using the following strings; show how the sentiment score varies each time
python snippets.py sentiment-text
text = 'homework is awful!'
text = 'homework is awesome?'
text = 'homework is awesome.'
text = 'homework is awesome!'

7. Integration
 See if words in a recording describe an object in an image
 Previous calls modified to return results as text (vs. print)
 Audio transcription (of foreign-language speech) to translation to NLP to obtain entities
 Image analysis to obtain labels
 Comparison to determine a match
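The final comparison step can be sketched in plain Python. The function name and inputs below are hypothetical stand-ins for the entity descriptions (from Speech, Translate, and Natural Language) and the image labels (from Vision) that the modified API calls would return:

```python
# Hypothetical sketch of the comparison step: intersect the entity
# descriptions from the audio pipeline with the labels from Vision.
# Names are illustrative, not the codelab's actual identifiers.
def find_matches(entity_descriptions, image_labels):
    # Compare case-insensitively, since label casing varies.
    entities = {e.lower() for e in entity_descriptions}
    labels = {l.lower() for l in image_labels}
    return sorted(entities & labels)

matches = find_matches(['Ball', 'jacket'], ['ball', 'sports equipment'])
# matches == ['ball'], so the recording plausibly describes the image
```

A non-empty intersection suggests the words in the recording describe an object in the image.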

8. Setup
 Clone the repository
git clone https://github.com/googlecodelabs/integrating-ml-apis
 In the repository, edit solution.py to use the older translate version
 Replace
from google.cloud import translate
 With
from google.cloud import translate_v2 as translate

9.  tr-TR speech samples:
gs://ml-api-codelab/tr-ball.wav
gs://ml-api-codelab/tr-bike.wav
gs://ml-api-codelab/tr-jacket.wav
gs://ml-api-codelab/tr-ostrich.wav
 de-DE speech samples:
gs://ml-api-codelab/de-ball.wav
gs://ml-api-codelab/de-bike.wav
gs://ml-api-codelab/de-jacket.wav
gs://ml-api-codelab/de-ostrich.wav

10. Integration
 See the code for the mods to transcribe_gcs() (Speech), translate_text() (Translate), entities_text() (Natural Language), and detect_labels_uri() (Vision)

11. ML APIs Lab #1
 Run at least 3 pairs other than the one given in the walk-through
python3 solution.py tr-TR gs://ml-api-codelab/tr-ball.wav gs://ml-api-codelab/football.jpg
 Integrating Machine Learning APIs (25 min)
https://codelabs.developers.google.com/codelabs/cloud-ml-apis

12. ML APIs Lab #2
 Using the (rest of the) Vision API with Python (8 min)
 Optical Character Recognition (OCR) (text detection)
 Landmark detection
 Sentiment analysis (face detection)

13. Setup
 Skip Steps 2, 3, and 4 (re-use the setup from ML APIs Lab #1)
 Copy the image files into your own bucket
gsutil cp gs://cloud-vision-codelab/otter_crossing.jpg gs://$DEVSHELL_PROJECT_ID
gsutil cp gs://cloud-vision-codelab/eiffel_tower.jpg gs://$DEVSHELL_PROJECT_ID
gsutil cp gs://cloud-vision-codelab/face_surprise.jpg gs://$DEVSHELL_PROJECT_ID
gsutil cp gs://cloud-vision-codelab/face_no_surprise.png gs://$DEVSHELL_PROJECT_ID
 For the rest of the examples, my project ID is used in the gs:// URIs; use yours instead
$ echo $DEVSHELL_PROJECT_ID
cs410c-wuchang-201515

14.  Make the bucket publicly readable via the console UI by giving allUsers the Storage Object Viewer role
https://cloud.google.com/storage/docs/access-control/making-data-public

15. Launch interactive ipython for the lab
wuchang@cloudshell:~ (cs430-wuchang-201515)$ ipython
In [1]:
 Show the full image of the Otter Crossing sign via your bucket
 Then use Vision's text_detection() to perform an OCR operation on a picture of the sign (substitute your bucket name in the gs:// URI)
from google.cloud import vision
from google.cloud.vision import types

client = vision.ImageAnnotatorClient()
image = vision.types.Image()
image.source.image_uri = 'gs://cs430-wuchang-201515/otter_crossing.jpg'
resp = client.text_detection(image=image)
print('\n'.join([d.description for d in resp.text_annotations]))

16.  Show the full Eiffel Tower image in your bucket
 Then use Vision's landmark_detection() to identify famous places (substitute your bucket name in the gs:// URI)
from google.cloud import vision
from google.cloud.vision import types

client = vision.ImageAnnotatorClient()
image = vision.types.Image()
image.source.image_uri = 'gs://cs430-wuchang-201515/eiffel_tower.jpg'
resp = client.landmark_detection(image=image)
print(resp.landmark_annotations)

17.  Show the two face images in your bucket
 Then use Vision's face_detection() to annotate the images (substitute your bucket name in the gs:// URI)
 See the likelihood of the faces showing surprise
from google.cloud import vision
from google.cloud.vision import types

client = vision.ImageAnnotatorClient()
image = vision.types.Image()
likelihood_name = ('UNKNOWN', 'VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE',
                   'LIKELY', 'VERY_LIKELY')
for pic in ('face_surprise.jpg', 'face_no_surprise.png'):
    image.source.image_uri = 'gs://cs430-wuchang-201515/' + pic
    resp = client.face_detection(image=image)
    faces = resp.face_annotations
    for face in faces:
        print(pic + ': surprise: {}'.format(likelihood_name[face.surprise_likelihood]))

18. ML APIs Lab #2
 Using the Vision API with Python (8 min)
https://codelabs.developers.google.com/codelabs/cloud-vision-api-python

19. ML APIs Lab #3
 Video Intelligence API (20 min)
 Ensure the API is enabled in the API Library

20. Setup credentials in Cloud Shell
 In Cloud Shell, create a service account named videolab
gcloud iam service-accounts create videolab --display-name "Video Lab"
 Create a policy that specifies a role of project viewer and attach it to the service account created in the previous step
gcloud projects add-iam-policy-binding ${DEVSHELL_PROJECT_ID} --member=serviceAccount:videolab@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com --role roles/viewer
 Create and download a service account key in JSON for applications to use in order to take on the roles associated with the service account
gcloud iam service-accounts keys create /home/${USER}/videolab.json --iam-account videolab@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com
 Set a local environment variable that points to the file from the previous step
 The Python script will access the credentials via this environment variable and file
export GOOGLE_APPLICATION_CREDENTIALS="/home/${USER}/videolab.json"
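Client libraries locate the key through the GOOGLE_APPLICATION_CREDENTIALS environment variable. A minimal sketch of what that resolution amounts to; the sample key below is fabricated for illustration (real keys contain these fields among others), and the libraries do all of this internally:

```python
import json
import os

# Sketch: how an application might locate and inspect the service-account
# key file that GOOGLE_APPLICATION_CREDENTIALS points at. Illustrative
# only; the Google client libraries perform this lookup themselves.
def key_path(env=os.environ):
    # The variable holds the path to the JSON key file.
    return env.get('GOOGLE_APPLICATION_CREDENTIALS')

def parse_key(key_json):
    # Standard service-account keys carry a "type" and "client_email".
    key = json.loads(key_json)
    return key['type'], key['client_email']

# Fabricated example key content:
sample = '{"type": "service_account", "client_email": "videolab@example.iam.gserviceaccount.com"}'
parse_key(sample)  # ('service_account', 'videolab@example.iam.gserviceaccount.com')
```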

21. Cloud Video Intelligence
 Video labeling code labels.py
cd ~/python-docs-samples/video/cloud-client/labels
 Labeling function analyze_labels()
 Create a client and set the features to extract
 Call annotate_video with the storage location of the video
 Get the result of the annotation (allow 90 seconds)
# python labels.py gs://cloud-ml-sandbox/video/chicago.mp4
import argparse
from google.cloud import videointelligence

def analyze_labels(path):
    video_client = videointelligence.VideoIntelligenceServiceClient()
    features = [videointelligence.enums.Feature.LABEL_DETECTION]
    operation = video_client.annotate_video(path, features=features)
    print('\nProcessing video for label annotations:')
    result = operation.result(timeout=90)
    print('\nFinished processing.')

22.  analyze_labels() (continued)
 Go through the labels returned in JSON
 Cycle through the labels and print each entity in the video and each entity's category
 For each entity, output the times in the video and the confidence in the detection
    # first result is retrieved because a single video was processed
    segment_labels = result.annotation_results[0].segment_label_annotations
    for i, segment_label in enumerate(segment_labels):
        print('Video label description: {}'.format(
            segment_label.entity.description))
        for category_entity in segment_label.category_entities:
            print('\tLabel category description: {}'.format(
                category_entity.description))
        for i, segment in enumerate(segment_label.segments):
            start_time = (segment.segment.start_time_offset.seconds +
                          segment.segment.start_time_offset.nanos / 1e9)
            end_time = (segment.segment.end_time_offset.seconds +
                        segment.segment.end_time_offset.nanos / 1e9)
            positions = '{}s to {}s'.format(start_time, end_time)
            confidence = segment.confidence
            print('\tSegment {}: {}'.format(i, positions))
            print('\tConfidence: {}'.format(confidence))
            print('\n')
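The start and end offsets above arrive as protobuf Duration values split into whole seconds and nanoseconds; the code combines them into fractional seconds. Isolated as a helper:

```python
# Convert a protobuf-style Duration (whole seconds plus nanoseconds)
# into fractional seconds, as the segment-printing loop does.
def offset_to_seconds(seconds, nanos):
    return seconds + nanos / 1e9

# e.g. a segment ending at 178 s + 978,800,000 ns is about 178.9788 s
offset_to_seconds(178, 978800000)
```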

23.  Set up the environment and install packages to run the code
virtualenv -p python3 env
source env/bin/activate
pip install -r requirements.txt
 Copy the video to a storage bucket
 Video from https://youtu.be/k2pBvCtwli8
 Also at https://thefengs.com/wuchang/courses/cs430/SportsBloopers2016.mp4
curl https://thefengs.com/wuchang/courses/cs430/SportsBloopers2016.mp4 | gsutil -h "Content-Type:video/mp4" cp - gs://<BUCKET_NAME>/SportsBloopers2016.mp4

24.  Run the code to perform the analysis
$ python labels.py gs://cs410c-wuchang-201515/SportsBloopers2016.mp4
Processing video for label annotations:
Finished processing.
Video label description: hockey
        Label category description: sports
        Segment 0: 0.0s to 178.9788s
        Confidence: 0.837484955788
Video label description: sports
        Segment 0: 0.0s to 178.9788s
        Confidence: 0.927089214325

25.  Watch the video and answer the following questions
 Which sports did the API properly identify?
 Which sports did the API fail to identify?
 Upload a short (< 2 min) video of your own to a Cloud Storage bucket and run the label script on it
 You can find one on YouTube, then use this site to pull it out as an mp4 that can be uploaded to your bucket
https://youtubemp4.to/
 Ensure the file in the bucket is publicly readable as before, via the command line or web UI
 See ML APIs Lab #2
https://cloud.google.com/storage/docs/access-control/making-data-public
 Note: if you get a permissions error, you may need to restart Cloud Shell
 Answer the following questions
 Show an example of a properly identified entity in your video
 Show an example of a missed or misclassified entity in your video

26. Optional (FYI)
 If you wish to explore more (for example, as a final project), see analyze.py for examples of detecting explicit content in video and labeling shots within a video
 shots breaks a video into clips based on camera shots
 explicit_content detects adult material
$ cd ~/python-docs-samples/video/cloud-client/analyze
# python analyze.py labels gs://cloud-ml-sandbox/video/chicago.mp4
# python analyze.py labels_file resources/cat.mp4
# python analyze.py shots gs://demomaker/gbikes_dinosaur.mp4
# python analyze.py explicit_content gs://demomaker/gbikes_dinosaur.mp4

27. ML APIs Lab #3
 Clean up
rm /home/${USER}/videolab.json
gcloud iam service-accounts delete videolab@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com
gcloud projects remove-iam-policy-binding ${DEVSHELL_PROJECT_ID} --member=serviceAccount:videolab@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com --role=roles/viewer
 Video Intelligence API (20 min)

28. ML APIs Lab #4
 Deploying a Python Flask Web Application to App Engine (24 min)
 Note that the codelab has you deploy using the flexible environment, which is not needed and is more expensive
 We will modify app.yaml to run on standard
 Skip Steps 2, 3, and 5
 Do Step 4 in Cloud Shell
cd python-docs-samples/codelabs/flex_and_vision

29.  Enable APIs (most likely already done)
gcloud services enable vision.googleapis.com
gcloud services enable storage-component.googleapis.com
gcloud services enable datastore.googleapis.com
 Create a service account for the lab
gcloud iam service-accounts create flexvisionlab --display-name "Flex Vision Lab"
 Create a policy and attach it to the service account to allow the application to view and store objects in buckets
gcloud projects add-iam-policy-binding ${DEVSHELL_PROJECT_ID} --member serviceAccount:flexvisionlab@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com --role roles/storage.admin

30.  Create a policy and attach it to the service account to allow the application access to Cloud Datastore
gcloud projects add-iam-policy-binding ${DEVSHELL_PROJECT_ID} --member serviceAccount:flexvisionlab@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com --role roles/datastore.user
 In IAM, view the roles that have been attached to this service account in the web UI to ensure the roles have been enabled before issuing a key
 Create a JSON key file used by the application to authenticate itself as the service account
gcloud iam service-accounts keys create /home/${USER}/flexvisionlab.json --iam-account flexvisionlab@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com
 Set your GOOGLE_APPLICATION_CREDENTIALS environment variable to point the application to the key
export GOOGLE_APPLICATION_CREDENTIALS="/home/${USER}/flexvisionlab.json"

31.  Set the location of the Cloud Storage bucket for the app's images via an environment variable
 If you have deleted your gs://${DEVSHELL_PROJECT_ID} bucket, create it again
gsutil mb gs://${DEVSHELL_PROJECT_ID}
 Then set the environment variable to point to it
export CLOUD_STORAGE_BUCKET=${DEVSHELL_PROJECT_ID}

32.  Create a python3 environment to test locally
virtualenv -p python3 env
source env/bin/activate
pip install -r requirements.txt
 Run the app on the dev server
 Note: if you get an error, exit the application and wait for the IAM credentials to fully propagate
python main.py

33.  Test the app via the web preview or by clicking on the link returned by python (http://127.0.0.1:8080)
 Upload a photo to detect joy in faces

34. Code for default route
@app.route('/')
def homepage():
    # Create a Cloud Datastore client.
    datastore_client = datastore.Client()

    # Use the Cloud Datastore client to fetch
    # information from Datastore about each photo.
    query = datastore_client.query(kind='Faces')
    image_entities = list(query.fetch())

    # Pass image_entities to the Jinja2 template to render.
    return render_template('homepage.html', image_entities=image_entities)

35. upload_photo()
 Code for uploading new images
from google.cloud import datastore
from google.cloud import storage
from google.cloud import vision

CLOUD_STORAGE_BUCKET = os.environ.get('CLOUD_STORAGE_BUCKET')

@app.route('/upload_photo', methods=['GET', 'POST'])
def upload_photo():
    photo = request.files['file']  # File from form submission

    storage_client = storage.Client()  # Create storage client.

    # Get bucket
    bucket = storage_client.get_bucket(CLOUD_STORAGE_BUCKET)

    # Create a blob to store the uploaded content, then upload the content to it
    blob = bucket.blob(photo.filename)
    blob.upload_from_string(photo.read(), content_type=photo.content_type)

    # Make the blob publicly available
    blob.make_public()

36.  Code for getting face annotations from the Vision API
    # Create a Cloud Vision client.
    vision_client = vision.ImageAnnotatorClient()

    # Use the Cloud Vision client to detect a face in our image.
    source_uri = 'gs://{}/{}'.format(CLOUD_STORAGE_BUCKET, blob.name)
    image = vision.types.Image(
        source=vision.types.ImageSource(gcs_image_uri=source_uri))
    faces = vision_client.face_detection(image).face_annotations

    # If a face is detected, store the likelihood that the face
    # displays 'joy' based on Vision's annotations
    if len(faces) > 0:
        face = faces[0]

        # Convert the likelihood string.
        likelihoods = [
            'Unknown', 'Very Unlikely', 'Unlikely', 'Possible',
            'Likely', 'Very Likely']
        face_joy = likelihoods[face.joy_likelihood]
    else:
        face_joy = 'Unknown'
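The Vision API reports joy_likelihood as a small integer enum, and the likelihoods list above turns it into a readable string. A standalone sketch of that mapping, with an out-of-range guard added for illustration:

```python
# Mirror of the likelihoods table used above: Vision's Likelihood
# enum values 0..5 index into these human-readable labels.
LIKELIHOODS = ['Unknown', 'Very Unlikely', 'Unlikely',
               'Possible', 'Likely', 'Very Likely']

def likelihood_label(value):
    # Fall back to 'Unknown' for anything outside the enum range.
    if 0 <= value < len(LIKELIHOODS):
        return LIKELIHOODS[value]
    return 'Unknown'

likelihood_label(5)  # 'Very Likely'
```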

37.  Code to insert an entry into Datastore (including a link to the file in Cloud Storage)
    datastore_client = datastore.Client()  # Create datastore client.
    current_datetime = datetime.now()      # Fetch current date / time.
    kind = 'Faces'                         # Set kind for new entity.
    name = blob.name                       # Set name/ID for new entity.

    # Create the Cloud Datastore key for the new entity.
    key = datastore_client.key(kind, name)

    # Construct the new entity using the key, as a dictionary,
    # including the "face_joy" label from Cloud Vision face detection
    entity = datastore.Entity(key)
    entity['blob_name'] = blob.name
    entity['image_public_url'] = blob.public_url
    entity['timestamp'] = current_datetime
    entity['joy'] = face_joy

    # Save the new entity to Datastore.
    datastore_client.put(entity)

    return redirect('/')

38. App Engine configuration
 Modify app.yaml (use the standard environment with 1 f1-micro; configure the storage bucket)
#runtime: python
#env: flex
runtime: python37
env: standard
entrypoint: gunicorn -b :$PORT main:app
runtime_config:
  python_version: 3
env_variables:
  CLOUD_STORAGE_BUCKET: <YOUR_STORAGE_BUCKET>
  # CLOUD_STORAGE_BUCKET: cs410c-wuchang-201515
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10

39. Deploy app
 Deactivate the development environment
deactivate
 Deploy
gcloud app deploy
 Note the custom container built and pushed into gcr.io to support the flexible environment's deployment onto App Engine
 Show the application running at
https://<PROJECT_ID>.appspot.com

40. ML APIs Lab #4
 Cleanup
rm /home/${USER}/flexvisionlab.json
gcloud projects remove-iam-policy-binding ${DEVSHELL_PROJECT_ID} --member=serviceAccount:flexvisionlab@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com --role=roles/storage.admin
gcloud projects remove-iam-policy-binding ${DEVSHELL_PROJECT_ID} --member=serviceAccount:flexvisionlab@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com --role=roles/datastore.user
gcloud iam service-accounts delete flexvisionlab@${DEVSHELL_PROJECT_ID}.iam.gserviceaccount.com
 Deploying a Python Flask Web Application to App Engine (24 min)
https://codelabs.developers.google.com/codelabs/cloud-vision-app-engine

41. AutoML Lab #1
 Upload a dataset of labeled cloud images to Cloud Storage and use AutoML to create a custom model to recognize clouds
 Data is encoded as a CSV file that contains labels and paths to individual files in the bucket
 Go to APIs & Services → Library, search for AutoML, and enable the API

42.  Visit the AutoML Vision console and allow access
https://cloud.google.com/automl/ui/vision
 Specify your Project ID (cs410c-wuchang-201515), then click "Set Up Now"
 Go to Cloud Storage → Browser and ensure the bucket gs://${DEVSHELL_PROJECT_ID}-vcm has been created
 Launch Cloud Shell and copy the training set from Google's public storage bucket into yours
export BUCKET=${DEVSHELL_PROJECT_ID}-vcm
gsutil -m cp -r gs://automl-codelab-clouds/* gs://${BUCKET}

43.  Refresh the bucket to see 3 directories of cloud images of different types
 Copy the dataset CSV file
 Each row in the CSV contains the URL of an image and its associated label
 Change the bucket location to point to your bucket above before copying it to your bucket
gsutil cp gs://automl-codelab-metadata/data.csv .
sed -i -e "s/placeholder/${BUCKET}/g" ./data.csv
gsutil cp ./data.csv gs://${BUCKET}
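The sed step rewrites every occurrence of the placeholder bucket name in data.csv. The same transformation in Python, for illustration; the sample row and label below are made up, not taken from the real dataset:

```python
# Equivalent of: sed -i -e "s/placeholder/${BUCKET}/g" ./data.csv
# Replace the placeholder bucket name in each CSV row with your own.
def rewrite_rows(rows, bucket):
    return [row.replace('placeholder', bucket) for row in rows]

# Hypothetical row in the data.csv format (image URL, label):
rows = ['gs://placeholder/cirrus/1.jpg,cirrus']
rewrite_rows(rows, 'my-project-vcm')
# ['gs://my-project-vcm/cirrus/1.jpg,cirrus']
```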

44.  Show the CSV file in the bucket, then open the CSV file and show the format of the dataset

45.  Go back to the AutoML Vision console
https://cloud.google.com/automl/ui/vision
 Create a new dataset and specify the CSV file, select Multi-Label classification, then "Create Dataset"
 Wait for the images to be imported

46.  Scroll through the images to ensure they imported properly
 Click on Train, then "Start Training"
 AutoML will create a custom model
 Go get some coffee; it takes a while to complete
 Show the full evaluation of the model, including the confusion matrix

47.  Visit the cloud image gallery at UCAR
https://scied.ucar.edu/cloud-image-gallery
 Download one image for each type trained and one image that is not any of the three
 Click on "Predict" and upload the 4 images
 Show the results of the prediction

48. AutoML Lab #1
https://codelabs.developers.google.com/codelabs/cloud-automl-vision-intro

  49. Firebase Labs

50. Firebase Lab #1
 Firebase Web Codelab (39 min)
 Create a project in the Google Firebase Console (different from the Google Cloud Console)
https://console.firebase.google.com/
 Call it firebaselab-<OdinID>

51. Register app
 Click on </> to register a new web app

52.  Register the app, but skip the next steps for including Firebase in your app and continue to the console
 We will do this in Cloud Shell

53. Enable use of Google authentication
 From the console, Develop => Authentication => Sign-In Method
 Enable Google account logins for your web app and call it FriendlyChat

54. Enable real-time database
 From the console, Develop => Database
 Scroll down to the Cloud Firestore database
 Then, Create database
 Enable "Start in test mode…"

55. Enable use of Cloud Storage
 Note: the bucket is initially wide open
 Develop => Storage => Get Started => Next
 Set the storage region to the default

56. Setup code
 Go to console.cloud.google.com to find the project created (firebaselab-<OdinID>)
 Visit Compute Engine and enable billing on the project

57. Setup code
 Launch Cloud Shell
 Clone the repository
git clone https://github.com/firebase/friendlychat-web
 Use npm to install the Firebase CLI; in Cloud Shell
cd friendlychat-web/web-start/public
npm -g install firebase-tools
 To verify that the CLI has been installed correctly, run
firebase --version

58. Install the Firebase CLI
 Authorize the Firebase CLI to deploy the app by running
firebase login --no-localhost
 Visit the URL given and log in to your pdx.edu account
 Note that you may need to cut-and-paste the entire URL given in the console
 Allow access

59.  Get the authorization code and paste it in to complete the login

60. Setup Firebase for app
 Make sure you are in the web-start directory, then set up Firebase to use your project
firebase use --add
 Use the arrow keys to select your Project ID and follow the instructions given

61. Examine Firebase code in web app
 Use Cloud Shell's code editor to view index.html
 Note that, for efficiency, the developer has only added the Firebase components that the app uses to the page
edit index.html
 Note the inclusion of init.js, which is created via the firebase use command and contains the project's Firebase credentials

62. Run the app from Cloud Shell
 Use the Firebase hosting emulator to deliver the app locally
firebase serve --only hosting

63. View running test application
 Click on the link, or go to Web Preview, change the port to 5000, and preview
 The app is not fully functional yet

64. View your credentials
 From the web app, view source, then
 Click on the init.js link
 See the project credentials
 Go back to Cloud Shell and Control-C to terminate the server

65. Part 1: Add Firebase Authentication
 In scripts/main.js, modify the signIn function to configure authentication using Google as the identity (OAuth) provider
// Signs-in Friendly Chat.
function signIn() {
  // Sign in Firebase w/ popup auth and Google as the identity provider.
  var provider = new firebase.auth.GoogleAuthProvider();
  firebase.auth().signInWithPopup(provider);
}
 Similarly, set the signOut function just below
// Signs-out of Friendly Chat.
function signOut() {
  // Sign out of Firebase.
  firebase.auth().signOut();
}

66.  Register a callback function (authStateObserver) in initFirebaseAuth that updates the UI whenever the authentication state of a user changes
 The function will update the profile photo and name of the (now) authenticated user using data from the OAuth provider (Google)
// Initiate firebase auth.
function initFirebaseAuth() {
  // Listen to auth state changes.
  firebase.auth().onAuthStateChanged(authStateObserver);
}

67.  Implement the calls from authStateObserver for getting the profile picture and name from the OAuth provider
// Returns the signed-in user's profile Pic URL.
function getProfilePicUrl() {
  return firebase.auth().currentUser.photoURL || '/images/profile_placeholder.png';
}

// Returns the signed-in user's display name.
function getUserName() {
  return firebase.auth().currentUser.displayName;
}
 Implement the check for login
// Returns true if a user is signed-in.
function isUserSignedIn() {
  return !!firebase.auth().currentUser;
}

68.  If you want to test with the development server (i.e. via firebase serve), you will need to authorize the appspot domain it is served from
 Firebase => Authentication => Sign-in Method => Authorized Domains
 Note that the domain used on a firebase deploy is enabled by default ($PROJECT_ID.firebaseapp.com)
 (Screenshots: the firebase serve domain and the firebase deploy domain)

69.  Ensure that third-party cookies are enabled on your browser
 In Chrome => Settings => Advanced => Privacy and Security => Site Settings => Cookies and site data => Block Third Party Cookies (disable this setting)

70. Test Signing In to the App
 Update the app
firebase serve --only hosting
 Click on the link or go to Web Preview and change the port to 5000
 Sign in with Google
 Show that the Google profile pic and name of the user are displayed

71. Part 2: Implement message sending
 Update saveMessage to use add() to store messages in the real-time database upon "Send" being clicked
// Saves a new message to your Cloud Firestore database.
function saveMessage(messageText) {
  // Add a new message entry to the database.
  return firebase.firestore().collection('messages').add({
    name: getUserName(),
    text: messageText,
    profilePicUrl: getProfilePicUrl(),
    timestamp: firebase.firestore.FieldValue.serverTimestamp()
  }).catch(function(error) {
    console.error('Error writing new message to database', error);
  });
}

72. Part 2: Implement message receiving
 Modify the loadMessages function in main.js
 Synchronizes messages on the app across clients
 Adds listeners that trigger when changes are made to the data
 Listeners update the UI element for showing messages
 Only display the last 12 messages of the chat for fast loading
 (See next slide)
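The query semantics described here (order by timestamp descending, limit to 12) can be mimicked locally to see which messages a client would load. This is an illustrative Python sketch, not the Firestore API; real clients issue the query to Firestore directly:

```python
# Sketch of the loadMessages query semantics: the 12 most recent
# messages, as Firestore's orderBy('timestamp', 'desc').limit(12)
# would select. Purely illustrative.
def last_n_messages(messages, n=12):
    return sorted(messages, key=lambda m: m['timestamp'], reverse=True)[:n]

# Hypothetical message records with increasing timestamps:
msgs = [{'text': 'm%d' % i, 'timestamp': i} for i in range(20)]
recent = [m['text'] for m in last_n_messages(msgs)]
# first is 'm19' (newest), last is 'm8', length 12
```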

73.
// Loads chat message history and listens for upcoming ones.
function loadMessages() {
  // Create query to load the last 12 messages and listen for new ones.
  var query = firebase.firestore()
                  .collection('messages')
                  .orderBy('timestamp', 'desc')
                  .limit(12);

  // Start listening to the query.
  query.onSnapshot(function(snapshot) {
    snapshot.docChanges().forEach(function(change) {
      if (change.type === 'removed') {
        deleteMessage(change.doc.id);
      } else {
        var message = change.doc.data();
        displayMessage(change.doc.id, message.timestamp, message.name,
                       message.text, message.profilePicUrl, message.imageUrl);
      }
    });
  });
}

74. Test
 Update your app
firebase serve
 Sign in to Google
 Click on the Message box, type a message, and click Send
 The message will be inserted into the real-time database
 The UI will automatically update with the message and the account profile picture

75.  Show the message in the database
 Note: one can mock up an iOS or Android client version to interoperate (see the two other codelabs)
https://codelabs.developers.google.com/codelabs/firebase-android
https://codelabs.developers.google.com/codelabs/firebase-ios-swift

76. Test real-time database updates
 Go back to the Firebase Database web UI to view the messages in the database
 We will manually add a message, and it will update the UI in real time automatically
 Click "Add document"

77.  Click on "Auto ID" for the Document ID, then enter the fields for the document
 name (string): Wu
 profilePicUrl (string): https://lh3.googleusercontent.com/a-/AAuE7mAaCBS0jz6HPgy_NW_UAlaoETpPoNZHTo2McVTAQQ
 text (string): Pretend the instructor added a message
 timestamp (timestamp): set to today's date and time

78. (screenshot)

79. Part 4: Implement image sending
 Update saveImageMessage to store images in the real-time database
 Initially create a message with a loading icon
 Take the file parameter and store it in Firebase storage
 Get the URL for the file in Firebase storage
 Update the message from step 1 with the URL to show the image in the UI

80.
// Saves a new message containing an image in Firebase.
// This first saves the image in Firebase storage.
function saveImageMessage(file) {
  // 1 - We add a message with a loading icon that will get updated with the shared image.
  firebase.firestore().collection('messages').add({
    name: getUserName(),
    imageUrl: LOADING_IMAGE_URL,
    profilePicUrl: getProfilePicUrl(),
    timestamp: firebase.firestore.FieldValue.serverTimestamp()
  }).then(function(messageRef) {
    // 2 - Upload the image to Cloud Storage.
    var filePath = firebase.auth().currentUser.uid + '/' +
                   messageRef.id + '/' + file.name;
    return firebase.storage().ref(filePath).put(file).then(function(fileSnapshot) {
      // 3 - Generate a public URL for the file.
      return fileSnapshot.ref.getDownloadURL().then((url) => {
        // 4 - Update the chat message placeholder with the image's URL.
        return messageRef.update({
          imageUrl: url,
          storageUri: fileSnapshot.metadata.fullPath
        });
      });
    });
  }).catch(function(error) {
    console.error('There was an error uploading a file to Cloud Storage:', error);
  });
}

81. UI (screenshot)
