  1. MongoDB for a High Volume Logistics Application Santa Clara, California | April 23rd – 25th, 2018

  2. about me ... Eric Potvin, Software Engineer on the performance team at Shipwire, an Ingram Micro company, in Sunnyvale, California

  3. … A little background

  4. who are we? We offer a cloud-based fulfillment software platform. This platform enables thousands of brands and online retailers to manage their order fulfillment operations. We support 20+ warehouses in multiple countries, including the USA, Canada, Australia, Italy, Germany, and China.

  5. warehouses are … old-fashioned Some warehouses are unable to easily adapt their systems to new technologies. Warehouses run on old infrastructure, i.e. legacy servers (AS/400) or service providers. Warehouses understand files … and FTP.

  6. what we have to deal with Millions of files received monthly. Gigabytes of various document file types (XML, TXT/CSV, PDF). Limitations on files received (raw zip files vs zip files). Limitations of FTP connections.

  7. lots of data to maintain 8 processing servers ingesting millions of files per month. Thousands of log files. 100+ GB of monthly logs / 250+ GB of data files.

  8. server resources & limitations By manipulating so many files, we suffer from high server resource consumption: lots of processes with constantly high CPU usage, each process with high RAM usage, and high network usage, with GBs of data transferred hourly.

  9. searching for information can be tedious Often, we need to look for data in case of errors or a common “we didn’t receive these files”. Data and logs are not available to users. Finding information requires an engineer to connect to each server.

  10. what about... NFS? This would eliminate the lookup across servers, but still has some issues: still a large number of files, network overhead for large files, and … -bash: /bin/ls: Argument list too long. MySQL? Changing the data structure requires maintenance.

  11. … so why did we choose MongoDB?

  12. get all data at no cost? Analytics software is great and allows any user to see data, but it can be costly and limited. MongoDB gives us the flexibility to save what we need, with no monthly or setup fee.

  13. better integrations All data is now visible to all users. It can be integrated with our in-house applications. A self-service tool allows users to take action immediately in case of issues. Accurate real-time tracking of documents. Real-time monitoring of documents and server resources.

  14. no more frequent reads/writes No more slow CRUD operations on an XML file on disk. Avoids millions of disk and memory operations. It also makes our code healthier …

  15. simplified code

  From (parsing XML files by hand):

      Document doc = db.parse(<my_file>);
      Element elem = doc.getDocumentElement();
      NodeList nl = elem.getElementsByTagName(<child>);
      for (int i = 0; i < nl.getLength(); i++) {
          NodeList node = ((Element) nl.item(i)).getElementsByTagName(<tag>);
          for (int j = 0; j < node.getLength(); j++) {
              // fetch data for what I need
              // and update later
          }
      }

  To (querying MongoDB directly):

      mongoClient.getDatabase(myDatabase)
          .getCollection(myCollection)
          .find(search)
          .projection(whatINeed);
      // and update later
      collection.update(search, dataToUpdate);
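
  A minimal, self-contained sketch of the “To” side using the MongoDB Java sync driver. The connection string, database/collection names, and field names here are illustrative assumptions, not the presenter’s actual code.

      import com.mongodb.client.MongoClient;
      import com.mongodb.client.MongoClients;
      import com.mongodb.client.MongoCollection;
      import com.mongodb.client.model.Filters;
      import com.mongodb.client.model.Projections;
      import com.mongodb.client.model.Updates;
      import org.bson.Document;

      public class SimplifiedAccess {
          public static void main(String[] args) {
              // Hypothetical connection string and namespace
              try (MongoClient mongoClient = MongoClients.create("mongodb://localhost:27017")) {
                  MongoCollection<Document> collection = mongoClient
                          .getDatabase("warehouse")
                          .getCollection("shipping_confirmations");

                  // Fetch only the fields we need instead of re-parsing a multi-MB XML file
                  Document doc = collection
                          .find(Filters.eq("orderId", "1234"))
                          .projection(Projections.include("trackingNumbers", "boxesShipped"))
                          .first();
                  System.out.println(doc);

                  // ... and update later
                  collection.updateOne(Filters.eq("orderId", "1234"),
                          Updates.set("status", "processed"));
              }
          }
      }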

  16. available for everyone and instantly Now all our apps can access MongoDB. Microservices can access the same data without delay. Data is available instantly, even after multiple manipulations.

  17. another ALTER? seriously? ... No more “system under maintenance” because we need to alter a big table. No need to worry about schema updates due to a warehouse’s updated file. And no need to store the entire content in a blob and try to search within it.
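
  To illustrate the schema flexibility the slide refers to, here is a sketch (hypothetical collection and field names) where two different warehouse file formats land in the same collection without any ALTER-style migration.

      import com.mongodb.client.MongoClient;
      import com.mongodb.client.MongoClients;
      import com.mongodb.client.MongoCollection;
      import org.bson.Document;
      import java.util.Arrays;

      public class FlexibleSchema {
          public static void main(String[] args) {
              try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                  MongoCollection<Document> files = client
                          .getDatabase("warehouse")
                          .getCollection("received_files");

                  // Older warehouse format: flat fields only
                  files.insertOne(new Document("file", "orders_1234.xml")
                          .append("carrier", "UPS"));

                  // Newer format adds nested fields; no table migration required
                  files.insertOne(new Document("file", "orders_5678.xml")
                          .append("carrier", new Document("name", "DHL").append("service", "Express"))
                          .append("trackingNumbers", Arrays.asList("JD0123", "JD0124")));
              }
          }
      }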

  18. where is my data? Data can be accessed through a “single point of access” (it all depends which secondary I am reading from). Faster data access with multiple secondaries. No more “file locked” … and waiting for the unlock ...
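
  A sketch of reading from secondaries with the Java driver: secondaryPreferred() spreads reads across secondaries and falls back to the primary. The hosts, replica set name, and namespace are hypothetical.

      import com.mongodb.ReadPreference;
      import com.mongodb.client.MongoClient;
      import com.mongodb.client.MongoClients;
      import com.mongodb.client.MongoDatabase;
      import org.bson.Document;

      public class ReadFromSecondaries {
          public static void main(String[] args) {
              // Hypothetical replica set members
              try (MongoClient client = MongoClients.create(
                      "mongodb://mongo1.example.com:27017,mongo2.example.com:27017,mongo3.example.com:27017/?replicaSet=rs0")) {
                  // Prefer secondaries for reads; fall back to the primary if none is available
                  MongoDatabase db = client.getDatabase("warehouse")
                          .withReadPreference(ReadPreference.secondaryPreferred());
                  Document doc = db.getCollection("shipping_confirmations")
                          .find(new Document("orderId", "1234"))
                          .first();
                  System.out.println(doc);
              }
          }
      }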

  19. server goes down, no big deal The election process is fantastic! No more downtime due to single points of failure. Easy to expand and/or upgrade.

  20. How did we reduce server resource usage?

  21. example of manipulating a single order 1 order from Chicago, USA to Québec City, Canada using an international carrier, 1 product ordered. This requires at least 7 XML files and 3 PDF files to be created

  22. shipping confirmation example This file contains multiple nodes giving details about the shipment: tracking numbers, number of boxes shipped, carrier details, etc... File size can be up to a few megabytes.

  23. nested loops of … O(n*r)? Looping through a file of a few megabytes is slow: each loop calls an API and updates database records. What if the process crashes, where do we restart from? Manual recovery. Constant monitoring of server resources.

  24. iterations (what we used to have) Open the entire file in memory. Loop through each record. For each record, loop through each box shipped. For each box shipped, loop through each product (quantity shipped, reason if not shipped).
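
  A rough sketch of what those nested iterations looked like, assuming a DOM-parsed shipping confirmation; the file name and element/tag names are hypothetical.

      import org.w3c.dom.Document;
      import org.w3c.dom.Element;
      import org.w3c.dom.NodeList;
      import javax.xml.parsers.DocumentBuilderFactory;
      import java.io.File;

      public class OldNestedIteration {
          public static void main(String[] args) throws Exception {
              // Open the entire (multi-megabyte) file in memory
              Document doc = DocumentBuilderFactory.newInstance()
                      .newDocumentBuilder()
                      .parse(new File("shipping_confirmation.xml"));

              NodeList records = doc.getDocumentElement().getElementsByTagName("record");
              for (int i = 0; i < records.getLength(); i++) {                     // each record
                  NodeList boxes = ((Element) records.item(i)).getElementsByTagName("box");
                  for (int j = 0; j < boxes.getLength(); j++) {                   // each box shipped
                      NodeList products = ((Element) boxes.item(j)).getElementsByTagName("product");
                      for (int k = 0; k < products.getLength(); k++) {            // each product
                          // call an API and update database records here, item by item
                      }
                  }
              }
          }
      }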

  25. Enough! Let’s keep this simple: O(1)

  26. no more loops ... Save only the data we care about, in our own standard format, using kilobytes of data. Higher efficiency when searching documents: one simple document, one single query.
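
  A minimal sketch of the idea: store one compact document per order in our own format, then answer questions with a single query. The collection and field names are assumptions.

      import com.mongodb.client.MongoClient;
      import com.mongodb.client.MongoClients;
      import com.mongodb.client.MongoCollection;
      import com.mongodb.client.model.Filters;
      import org.bson.Document;
      import java.util.Arrays;

      public class OneDocumentOneQuery {
          public static void main(String[] args) {
              try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                  MongoCollection<Document> orders = client
                          .getDatabase("warehouse")
                          .getCollection("orders");

                  // Kilobytes of data we actually care about, in our own standard format
                  orders.insertOne(new Document("orderId", "1234")
                          .append("carrier", "International Carrier")
                          .append("boxes", Arrays.asList(
                                  new Document("tracking", "JD0123").append("items", 1))));

                  // One simple document, one single query
                  Document order = orders.find(Filters.eq("orderId", "1234")).first();
                  System.out.println(order);
              }
          }
      }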

  27. “Stateful” resource Keeps track of data changes inside the document. No more intensive memory and disk usage due to multiple file manipulations. Real-time manual changes from a UI by any user.
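
  One way to keep state changes inside the document, sketched with $set and $push updates; the status and history fields are hypothetical, not the presenter’s schema.

      import com.mongodb.client.MongoClient;
      import com.mongodb.client.MongoClients;
      import com.mongodb.client.MongoCollection;
      import com.mongodb.client.model.Filters;
      import com.mongodb.client.model.Updates;
      import org.bson.Document;
      import java.util.Date;

      public class StatefulDocument {
          public static void main(String[] args) {
              try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                  MongoCollection<Document> orders = client
                          .getDatabase("warehouse")
                          .getCollection("orders");

                  // Record a state change inside the document itself,
                  // instead of rewriting files on disk
                  orders.updateOne(Filters.eq("orderId", "1234"),
                          Updates.combine(
                                  Updates.set("status", "shipping_confirmed"),
                                  Updates.push("history", new Document("status", "shipping_confirmed")
                                          .append("at", new Date())
                                          .append("by", "warehouse_feed"))));
              }
          }
      }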

  28. Fault tolerant MongoDB gives us persistent data (server reboot, segmentation fault, etc…). Eliminates memory issues caused by reading multiple large text files into memory. Frees up resources for other applications running on the same server.

  29. server resources This results in processes with very low resource usage. CPU percentage and load went down drastically. Network usage dropped considerably.

  30. disk utilization No more -bash: /bin/ls: Argument list too long. Lots of free space reused for something else. No more frequent “cleanup” or disk maintenance. No more file archiving/maintenance to a backup server. No more “disk at 95% utilization” alerts.

  31. Let’s see a simple example

  32. Application logs

  33. application logs (what we used to have) Each application logs its data to its own specific files. Each log entry uses a log level based on what is executed: CRIT (0), ERR (1), WARN (2), INFO (3), DEBUG (4). Logs are saved in the following format in /var/log/my_application/my_app.log:
      2017-11-12T03:50:02-08:00 [ INFO / 3 ] (PID: 12345): My message

  34. application log (search) To search, we simply need to run:
      for x in $(seq 1 8); do ssh "p$x.myserver" "grep -r 'my search' /logs/app/*"; done
  … wait … and … wait

  35. no more ! let’s fix this

  36. logging in MongoDB Each application logs its data to its own specific namespace. Database used: <application_name>. Collection used: <application_specific>. Example: warehouse.sending_files

  37. logging in MongoDB (example)
      { "datetime": ISODate(), "level": "INFO", "code": 3, "pid": 12345, "message": "file orders_1234.zip sent to /inbound/" }
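
  What writing that log entry could look like from the application side with the Java driver; a sketch, assuming the warehouse.sending_files namespace from the previous slide and a hypothetical connection string.

      import com.mongodb.client.MongoClient;
      import com.mongodb.client.MongoClients;
      import com.mongodb.client.MongoCollection;
      import org.bson.Document;
      import java.util.Date;

      public class MongoLogger {
          public static void main(String[] args) {
              try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                  // Database: <application_name>, collection: <application_specific>
                  MongoCollection<Document> log = client
                          .getDatabase("warehouse")
                          .getCollection("sending_files");

                  log.insertOne(new Document("datetime", new Date())   // stored as a BSON date (ISODate)
                          .append("level", "INFO")
                          .append("code", 3)
                          .append("pid", 12345)
                          .append("message", "file orders_1234.zip sent to /inbound/"));
              }
          }
      }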

  38. MongoDB log (search)
      use logs;
      db.my_app.find();
      db.my_app.find({level: "INFO"});
      db.my_app.find({message: /some specific data/});
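
  The same searches from application code, sketched with the Java driver; the logs.my_app namespace follows the slide’s example, everything else is illustrative.

      import com.mongodb.client.MongoClient;
      import com.mongodb.client.MongoClients;
      import com.mongodb.client.MongoCollection;
      import com.mongodb.client.model.Filters;
      import org.bson.Document;

      public class SearchLogs {
          public static void main(String[] args) {
              try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                  MongoCollection<Document> myApp = client.getDatabase("logs").getCollection("my_app");

                  // db.my_app.find({level: "INFO"})
                  for (Document d : myApp.find(Filters.eq("level", "INFO"))) {
                      System.out.println(d.getString("message"));
                  }

                  // db.my_app.find({message: /some specific data/})
                  for (Document d : myApp.find(Filters.regex("message", "some specific data"))) {
                      System.out.println(d.toJson());
                  }
              }
          }
      }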

  39. archiving logs Archiving data can be done by using a TTL index. ● Warning: the TTL monitor runs every 60 seconds and scans all namespaces to identify which records need to be removed; this can slow down data access. Another way is to create a daemon that generates “yearly or monthly” collections, then use mongodump to archive the records.
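
  Creating such a TTL index from the Java driver might look like this; the logs.my_app namespace follows the earlier example and the 90-day retention is an arbitrary illustration.

      import com.mongodb.client.MongoClient;
      import com.mongodb.client.MongoClients;
      import com.mongodb.client.model.IndexOptions;
      import com.mongodb.client.model.Indexes;
      import java.util.concurrent.TimeUnit;

      public class LogRetention {
          public static void main(String[] args) {
              try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                  // Documents expire 90 days after their "datetime" value
                  client.getDatabase("logs")
                          .getCollection("my_app")
                          .createIndex(Indexes.ascending("datetime"),
                                  new IndexOptions().expireAfter(90L, TimeUnit.DAYS));
              }
          }
      }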

  40. So … What can MongoDB do for you?

  41. Q+A ?

  42. Thank You!
